00:00:00.000 Started by upstream project "autotest-per-patch" build number 127170
00:00:00.000 originally caused by:
00:00:00.001 Started by upstream project "jbp-per-patch" build number 24317
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.051 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:02.764 The recommended git tool is: git
00:00:02.764 using credential 00000000-0000-0000-0000-000000000002
00:00:02.766 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:02.778 Fetching changes from the remote Git repository
00:00:02.779 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:02.789 Using shallow fetch with depth 1
00:00:02.789 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:02.789 > git --version # timeout=10
00:00:02.802 > git --version # 'git version 2.39.2'
00:00:02.802 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:02.813 Setting http proxy: proxy-dmz.intel.com:911
00:00:02.813 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/32/24332/2 # timeout=5
00:00:07.831 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.843 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.853 Checking out Revision 1410c9c474f7ce6874b6ec6ac44d331a6633148e (FETCH_HEAD)
00:00:07.853 > git config core.sparsecheckout # timeout=10
00:00:07.864 > git read-tree -mu HEAD # timeout=10
00:00:07.879 > git checkout -f 1410c9c474f7ce6874b6ec6ac44d331a6633148e # timeout=5
00:00:07.900 Commit message: "jjb/autotest: add SPDK_TEST_RAID flag for docker-autotest jobs"
00:00:07.900 > git rev-list --no-walk c820853bed186e0165696f7328ede63422cfa1d9 # timeout=10
00:00:07.997 [Pipeline] Start of Pipeline
00:00:08.013 [Pipeline] library
00:00:08.015 Loading library shm_lib@master
00:00:08.015 Library shm_lib@master is cached. Copying from home.
00:00:08.030 [Pipeline] node
00:00:23.032 Still waiting to schedule task
00:00:23.032 Waiting for next available executor on ‘vagrant-vm-host’
00:00:40.726 Running on VM-host-SM16 in /var/jenkins/workspace/nvme-vg-autotest_3
00:00:40.728 [Pipeline] {
00:00:40.740 [Pipeline] catchError
00:00:40.742 [Pipeline] {
00:00:40.755 [Pipeline] wrap
00:00:40.764 [Pipeline] {
00:00:40.772 [Pipeline] stage
00:00:40.774 [Pipeline] { (Prologue)
00:00:40.794 [Pipeline] echo
00:00:40.796 Node: VM-host-SM16
00:00:40.801 [Pipeline] cleanWs
00:00:40.808 [WS-CLEANUP] Deleting project workspace...
00:00:40.808 [WS-CLEANUP] Deferred wipeout is used...
00:00:40.814 [WS-CLEANUP] done
00:00:40.973 [Pipeline] setCustomBuildProperty
00:00:41.053 [Pipeline] httpRequest
00:00:41.074 [Pipeline] echo
00:00:41.076 Sorcerer 10.211.164.101 is alive
00:00:41.084 [Pipeline] httpRequest
00:00:41.087 HttpMethod: GET
00:00:41.088 URL: http://10.211.164.101/packages/jbp_1410c9c474f7ce6874b6ec6ac44d331a6633148e.tar.gz
00:00:41.088 Sending request to url: http://10.211.164.101/packages/jbp_1410c9c474f7ce6874b6ec6ac44d331a6633148e.tar.gz
00:00:41.107 Response Code: HTTP/1.1 200 OK
00:00:41.107 Success: Status code 200 is in the accepted range: 200,404
00:00:41.107 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_3/jbp_1410c9c474f7ce6874b6ec6ac44d331a6633148e.tar.gz
00:00:44.940 [Pipeline] sh
00:00:45.219 + tar --no-same-owner -xf jbp_1410c9c474f7ce6874b6ec6ac44d331a6633148e.tar.gz
00:00:45.234 [Pipeline] httpRequest
00:00:45.255 [Pipeline] echo
00:00:45.257 Sorcerer 10.211.164.101 is alive
00:00:45.266 [Pipeline] httpRequest
00:00:45.270 HttpMethod: GET
00:00:45.270 URL: http://10.211.164.101/packages/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz
00:00:45.271 Sending request to url: http://10.211.164.101/packages/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz
00:00:45.276 Response Code: HTTP/1.1 200 OK
00:00:45.276 Success: Status code 200 is in the accepted range: 200,404
00:00:45.277 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_3/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz
00:01:54.700 [Pipeline] sh
00:01:54.973 + tar --no-same-owner -xf spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz
00:01:58.269 [Pipeline] sh
00:01:58.548 + git -C spdk log --oneline -n5
00:01:58.548 704257090 lib/reduce: fix the incorrect calculation method for the number of io_unit required for metadata.
00:01:58.548 fc2398dfa raid: clear base bdev configure_cb after executing
00:01:58.548 5558f3f50 raid: complete bdev_raid_create after sb is written
00:01:58.548 d005e023b raid: fix empty slot not updated in sb after resize
00:01:58.548 f41dbc235 nvme: always specify CC_CSS_NVM when CAP_CSS_IOCS is not set
00:01:58.567 [Pipeline] writeFile
00:01:58.583 [Pipeline] sh
00:01:58.866 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:58.877 [Pipeline] sh
00:01:59.158 + cat autorun-spdk.conf
00:01:59.159 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:59.159 SPDK_TEST_NVME=1
00:01:59.159 SPDK_TEST_FTL=1
00:01:59.159 SPDK_TEST_ISAL=1
00:01:59.159 SPDK_RUN_ASAN=1
00:01:59.159 SPDK_RUN_UBSAN=1
00:01:59.159 SPDK_TEST_XNVME=1
00:01:59.159 SPDK_TEST_NVME_FDP=1
00:01:59.159 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:59.165 RUN_NIGHTLY=0
00:01:59.167 [Pipeline] }
00:01:59.184 [Pipeline] // stage
00:01:59.199 [Pipeline] stage
00:01:59.201 [Pipeline] { (Run VM)
00:01:59.217 [Pipeline] sh
00:01:59.499 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:59.499 + echo 'Start stage prepare_nvme.sh'
00:01:59.499 Start stage prepare_nvme.sh
00:01:59.499 + [[ -n 6 ]]
00:01:59.499 + disk_prefix=ex6
00:01:59.499 + [[ -n /var/jenkins/workspace/nvme-vg-autotest_3 ]]
00:01:59.499 + [[ -e /var/jenkins/workspace/nvme-vg-autotest_3/autorun-spdk.conf ]]
00:01:59.499 + source /var/jenkins/workspace/nvme-vg-autotest_3/autorun-spdk.conf
00:01:59.499 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:59.499 ++ SPDK_TEST_NVME=1
00:01:59.499 ++ SPDK_TEST_FTL=1
00:01:59.499 ++ SPDK_TEST_ISAL=1
00:01:59.499 ++ SPDK_RUN_ASAN=1
00:01:59.499 ++ SPDK_RUN_UBSAN=1
00:01:59.499 ++ SPDK_TEST_XNVME=1
00:01:59.499 ++ SPDK_TEST_NVME_FDP=1
00:01:59.499 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:59.499 ++ RUN_NIGHTLY=0
00:01:59.499 + cd /var/jenkins/workspace/nvme-vg-autotest_3
00:01:59.499 + nvme_files=()
00:01:59.499 + declare -A nvme_files
00:01:59.499 + backend_dir=/var/lib/libvirt/images/backends
00:01:59.499 + nvme_files['nvme.img']=5G
00:01:59.499 + nvme_files['nvme-cmb.img']=5G
00:01:59.499 + nvme_files['nvme-multi0.img']=4G
00:01:59.499 + nvme_files['nvme-multi1.img']=4G
00:01:59.499 + nvme_files['nvme-multi2.img']=4G
00:01:59.499 + nvme_files['nvme-openstack.img']=8G
00:01:59.499 + nvme_files['nvme-zns.img']=5G
00:01:59.499 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:59.499 + (( SPDK_TEST_FTL == 1 ))
00:01:59.499 + nvme_files["nvme-ftl.img"]=6G
00:01:59.499 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:59.499 + nvme_files["nvme-fdp.img"]=1G
00:01:59.499 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:59.499 + for nvme in "${!nvme_files[@]}"
00:01:59.499 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi2.img -s 4G
00:01:59.499 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:59.499 + for nvme in "${!nvme_files[@]}"
00:01:59.499 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-ftl.img -s 6G
00:02:00.433 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:02:00.433 + for nvme in "${!nvme_files[@]}"
00:02:00.433 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-cmb.img -s 5G
00:02:00.433 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:02:00.433 + for nvme in "${!nvme_files[@]}"
00:02:00.433 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-openstack.img -s 8G
00:02:00.433 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:02:00.433 + for nvme in "${!nvme_files[@]}"
00:02:00.433 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-zns.img -s 5G
00:02:01.367 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:02:01.367 + for nvme in "${!nvme_files[@]}"
00:02:01.367 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi1.img -s 4G
00:02:01.367 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:02:01.367 + for nvme in "${!nvme_files[@]}"
00:02:01.367 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi0.img -s 4G
00:02:01.367 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:02:01.367 + for nvme in "${!nvme_files[@]}"
00:02:01.367 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-fdp.img -s 1G
00:02:01.367 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:02:01.367 + for nvme in "${!nvme_files[@]}"
00:02:01.367 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme.img -s 5G
00:02:02.301 Formatting '/var/lib/libvirt/images/backends/ex6-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:02:02.301 ++ sudo grep -rl ex6-nvme.img /etc/libvirt/qemu
00:02:02.301 + echo 'End stage prepare_nvme.sh'
00:02:02.301 End stage prepare_nvme.sh
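The prepare_nvme.sh trace above is driven by a single Bash associative array: image names map to sizes, the FTL and FDP images are appended only when SPDK_TEST_FTL and SPDK_TEST_NVME_FDP are set, and one loop creates every backing file. A minimal sketch of that pattern, with qemu-img standing in for the create_nvme_img.sh helper (the helper's body is not shown in this log, so the creation command is an assumption):

    #!/usr/bin/env bash
    # Sketch of the image-creation loop traced above; qemu-img is a
    # stand-in for spdk/scripts/vagrant/create_nvme_img.sh.
    set -euo pipefail
    disk_prefix=ex6
    backend_dir=/var/lib/libvirt/images/backends
    declare -A nvme_files=(
        ['nvme.img']=5G ['nvme-cmb.img']=5G ['nvme-multi0.img']=4G
        ['nvme-multi1.img']=4G ['nvme-multi2.img']=4G
        ['nvme-openstack.img']=8G ['nvme-zns.img']=5G
    )
    (( ${SPDK_TEST_FTL:-0} == 1 )) && nvme_files['nvme-ftl.img']=6G
    (( ${SPDK_TEST_NVME_FDP:-0} == 1 )) && nvme_files['nvme-fdp.img']=1G
    for nvme in "${!nvme_files[@]}"; do
        # Produces e.g. ex6-nvme-ftl.img: raw format, falloc-preallocated,
        # matching the "Formatting ..." lines in the log.
        qemu-img create -f raw -o preallocation=falloc \
            "$backend_dir/$disk_prefix-$nvme" "${nvme_files[$nvme]}"
    done

The sizes match what the log formats: 4G for the three multi-namespace images, 6G for FTL, 1G for FDP, and 5G for the plain nvme.img.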
00:02:02.313 [Pipeline] sh
00:02:02.590 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:02:02.590 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex6-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex6-nvme.img -b /var/lib/libvirt/images/backends/ex6-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex6-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora38
00:02:02.590
00:02:02.590
00:02:02.590 DIR=/var/jenkins/workspace/nvme-vg-autotest_3/spdk/scripts/vagrant
00:02:02.590 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest_3/spdk
00:02:02.590 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest_3
00:02:02.590 HELP=0
00:02:02.590 DRY_RUN=0
00:02:02.590 NVME_FILE=/var/lib/libvirt/images/backends/ex6-nvme-ftl.img,/var/lib/libvirt/images/backends/ex6-nvme.img,/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,/var/lib/libvirt/images/backends/ex6-nvme-fdp.img,
00:02:02.590 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:02:02.590 NVME_AUTO_CREATE=0
00:02:02.590 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,,
00:02:02.590 NVME_CMB=,,,,
00:02:02.590 NVME_PMR=,,,,
00:02:02.590 NVME_ZNS=,,,,
00:02:02.590 NVME_MS=true,,,,
00:02:02.590 NVME_FDP=,,,on,
00:02:02.590 SPDK_VAGRANT_DISTRO=fedora38
00:02:02.590 SPDK_VAGRANT_VMCPU=10
00:02:02.590 SPDK_VAGRANT_VMRAM=12288
00:02:02.590 SPDK_VAGRANT_PROVIDER=libvirt
00:02:02.590 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:02:02.590 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:02:02.590 SPDK_OPENSTACK_NETWORK=0
00:02:02.590 VAGRANT_PACKAGE_BOX=0
00:02:02.590 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest_3/spdk/scripts/vagrant/Vagrantfile
00:02:02.590 FORCE_DISTRO=true
00:02:02.590 VAGRANT_BOX_VERSION=
00:02:02.590 EXTRA_VAGRANTFILES=
00:02:02.590 NIC_MODEL=e1000
00:02:02.590
00:02:02.590 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest_3/fedora38-libvirt'
00:02:02.590 /var/jenkins/workspace/nvme-vg-autotest_3/fedora38-libvirt /var/jenkins/workspace/nvme-vg-autotest_3
00:02:05.927 Bringing machine 'default' up with 'libvirt' provider...
00:02:06.861 ==> default: Creating image (snapshot of base box volume).
00:02:06.861 ==> default: Creating domain with the following settings...
00:02:06.861 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721906825_08074cfae6f49a6e7167
00:02:06.861 ==> default: -- Domain type: kvm
00:02:06.861 ==> default: -- Cpus: 10
00:02:06.861 ==> default: -- Feature: acpi
00:02:06.861 ==> default: -- Feature: apic
00:02:06.861 ==> default: -- Feature: pae
00:02:06.861 ==> default: -- Memory: 12288M
00:02:06.861 ==> default: -- Memory Backing: hugepages:
00:02:06.861 ==> default: -- Management MAC:
00:02:06.861 ==> default: -- Loader:
00:02:06.861 ==> default: -- Nvram:
00:02:06.861 ==> default: -- Base box: spdk/fedora38
00:02:06.861 ==> default: -- Storage pool: default
00:02:06.861 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721906825_08074cfae6f49a6e7167.img (20G)
00:02:06.861 ==> default: -- Volume Cache: default
00:02:06.861 ==> default: -- Kernel:
00:02:06.861 ==> default: -- Initrd:
00:02:06.861 ==> default: -- Graphics Type: vnc
00:02:06.861 ==> default: -- Graphics Port: -1
00:02:06.861 ==> default: -- Graphics IP: 127.0.0.1
00:02:06.861 ==> default: -- Graphics Password: Not defined
00:02:06.861 ==> default: -- Video Type: cirrus
00:02:06.861 ==> default: -- Video VRAM: 9216
00:02:06.861 ==> default: -- Sound Type:
00:02:06.861 ==> default: -- Keymap: en-us
00:02:06.861 ==> default: -- TPM Path:
00:02:06.861 ==> default: -- INPUT: type=mouse, bus=ps2
00:02:06.861 ==> default: -- Command line args:
00:02:06.861 ==> default: -> value=-device,
00:02:06.861 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:02:06.861 ==> default: -> value=-drive,
00:02:06.861 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:02:06.861 ==> default: -> value=-device,
00:02:06.861 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:02:06.861 ==> default: -> value=-device,
00:02:06.861 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:02:06.861 ==> default: -> value=-drive,
00:02:06.861 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-1-drive0,
00:02:06.861 ==> default: -> value=-device,
00:02:06.861 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:06.861 ==> default: -> value=-device,
00:02:06.861 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:02:06.861 ==> default: -> value=-drive,
00:02:06.862 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:02:06.862 ==> default: -> value=-device,
00:02:06.862 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:06.862 ==> default: -> value=-drive,
00:02:06.862 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:02:06.862 ==> default: -> value=-device,
00:02:06.862 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:06.862 ==> default: -> value=-drive,
00:02:06.862 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:02:06.862 ==> default: -> value=-device,
00:02:06.862 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:06.862 ==> default: -> value=-device,
00:02:06.862 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:02:06.862 ==> default: -> value=-device,
00:02:06.862 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:02:06.862 ==> default: -> value=-drive,
00:02:06.862 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:02:06.862 ==> default: -> value=-device,
00:02:06.862 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
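The domain args above wire up four emulated controllers: nvme-0 carries the FTL image with 64 bytes of per-block metadata (ms=64), nvme-1 is a plain single-namespace device, nvme-2 receives three nvme-ns devices (nsid=1..3) backed by the multi0/1/2 images, and nvme-3 is attached to an nvme-subsys with Flexible Data Placement enabled, since QEMU models FDP as a subsystem-level property. A hand-runnable sketch of just that FDP controller, with the NVMe flags copied from the args above (the -machine/-m plumbing is an illustrative assumption, not from the log):

    qemu-system-x86_64 \
        -machine pc,accel=kvm -m 1024 \
        -drive format=raw,file=ex6-nvme-fdp.img,if=none,id=nvme-3-drive0 \
        -device nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8 \
        -device nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3 \
        -device nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,logical_block_size=4096,physical_block_size=4096

fdp.runs sets the nominal reclaim unit size, while fdp.nrg and fdp.nruh set the number of reclaim groups and reclaim unit handles that the SPDK_TEST_NVME_FDP tests exercise.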
00:02:07.134 ==> default: Creating shared folders metadata...
00:02:07.134 ==> default: Starting domain.
00:02:09.029 ==> default: Waiting for domain to get an IP address...
00:02:27.140 ==> default: Waiting for SSH to become available...
00:02:27.141 ==> default: Configuring and enabling network interfaces...
00:02:30.418 default: SSH address: 192.168.121.157:22
00:02:30.418 default: SSH username: vagrant
00:02:30.418 default: SSH auth method: private key
00:02:32.948 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_3/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:41.113 ==> default: Mounting SSHFS shared folder...
00:02:42.487 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_3/fedora38-libvirt/output => /home/vagrant/spdk_repo/output
00:02:42.487 ==> default: Checking Mount..
00:02:43.422 ==> default: Folder Successfully Mounted!
00:02:43.422 ==> default: Running provisioner: file...
00:02:44.354 default: ~/.gitconfig => .gitconfig
00:02:44.612
00:02:44.612 SUCCESS!
00:02:44.612
00:02:44.612 cd to /var/jenkins/workspace/nvme-vg-autotest_3/fedora38-libvirt and type "vagrant ssh" to use.
00:02:44.612 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:44.612 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest_3/fedora38-libvirt" to destroy all trace of vm.
00:02:44.612
00:02:44.620 [Pipeline] }
00:02:44.636 [Pipeline] // stage
00:02:44.644 [Pipeline] dir
00:02:44.644 Running in /var/jenkins/workspace/nvme-vg-autotest_3/fedora38-libvirt
00:02:44.646 [Pipeline] {
00:02:44.656 [Pipeline] catchError
00:02:44.657 [Pipeline] {
00:02:44.667 [Pipeline] sh
00:02:44.943 + vagrant ssh-config --host vagrant
00:02:44.943 + sed -ne /^Host/,$p
00:02:44.943 + tee ssh_conf
00:02:49.127 Host vagrant
00:02:49.127 HostName 192.168.121.157
00:02:49.127 User vagrant
00:02:49.127 Port 22
00:02:49.127 UserKnownHostsFile /dev/null
00:02:49.127 StrictHostKeyChecking no
00:02:49.127 PasswordAuthentication no
00:02:49.127 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38
00:02:49.127 IdentitiesOnly yes
00:02:49.127 LogLevel FATAL
00:02:49.127 ForwardAgent yes
00:02:49.127 ForwardX11 yes
00:02:49.127
00:02:49.139 [Pipeline] withEnv
00:02:49.141 [Pipeline] {
00:02:49.154 [Pipeline] sh
00:02:49.430 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:02:49.430 source /etc/os-release
00:02:49.430 [[ -e /image.version ]] && img=$(< /image.version)
00:02:49.430 # Minimal, systemd-like check.
00:02:49.430 if [[ -e /.dockerenv ]]; then
00:02:49.430 # Clear garbage from the node's name:
00:02:49.430 # agt-er_autotest_547-896 -> autotest_547-896
00:02:49.430 # $HOSTNAME is the actual container id
00:02:49.430 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:49.430 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:49.430 # We can assume this is a mount from a host where container is running,
00:02:49.430 # so fetch its hostname to easily identify the target swarm worker.
00:02:49.431 container="$(< /etc/hostname) ($agent)"
00:02:49.431 else
00:02:49.431 # Fallback
00:02:49.431 container=$agent
00:02:49.431 fi
00:02:49.431 fi
00:02:49.431 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:49.431
00:02:49.699 [Pipeline] }
00:02:49.716 [Pipeline] // withEnv
00:02:49.724 [Pipeline] setCustomBuildProperty
00:02:49.736 [Pipeline] stage
00:02:49.739 [Pipeline] { (Tests)
00:02:49.755 [Pipeline] sh
00:02:50.074 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_3/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:50.344 [Pipeline] sh
00:02:50.624 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_3/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:50.893 [Pipeline] timeout
00:02:50.894 Timeout set to expire in 40 min
00:02:50.895 [Pipeline] {
00:02:50.909 [Pipeline] sh
00:02:51.187 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:02:51.753 HEAD is now at 704257090 lib/reduce: fix the incorrect calculation method for the number of io_unit required for metadata.
00:02:51.765 [Pipeline] sh
00:02:52.044 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:02:52.314 [Pipeline] sh
00:02:52.611 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_3/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:52.628 [Pipeline] sh
00:02:52.908 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo
00:02:53.167 ++ readlink -f spdk_repo
00:02:53.167 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:53.167 + [[ -n /home/vagrant/spdk_repo ]]
00:02:53.167 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:53.167 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:53.167 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:53.167 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:53.167 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:53.167 + [[ nvme-vg-autotest == pkgdep-* ]]
00:02:53.167 + cd /home/vagrant/spdk_repo
00:02:53.167 + source /etc/os-release
00:02:53.167 ++ NAME='Fedora Linux'
00:02:53.167 ++ VERSION='38 (Cloud Edition)'
00:02:53.167 ++ ID=fedora
00:02:53.167 ++ VERSION_ID=38
00:02:53.167 ++ VERSION_CODENAME=
00:02:53.167 ++ PLATFORM_ID=platform:f38
00:02:53.167 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:02:53.167 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:53.167 ++ LOGO=fedora-logo-icon
00:02:53.167 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:02:53.167 ++ HOME_URL=https://fedoraproject.org/
00:02:53.167 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:02:53.167 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:53.167 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:53.167 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:53.167 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:02:53.167 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:53.167 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:02:53.167 ++ SUPPORT_END=2024-05-14
00:02:53.167 ++ VARIANT='Cloud Edition'
00:02:53.167 ++ VARIANT_ID=cloud
00:02:53.167 + uname -a
00:02:53.167 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:02:53.167 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:53.425 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:02:53.683 Hugepages
00:02:53.683 node hugesize free / total
00:02:53.683 node0 1048576kB 0 / 0
00:02:53.683 node0 2048kB 0 / 0
00:02:53.683
00:02:53.683 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:53.940 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:53.940 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:02:53.940 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:02:53.940 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:02:53.940 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
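The setup.sh status table closes the loop on the VM definition: the four 1b36:0010 controllers at BDFs 00:10.0 through 00:13.0 are exactly the nvme-0..nvme-3 devices from the QEMU args (addr=0x10..0x13), and nvme2 shows the three namespaces created for the multi images. A sketch of making the same cross-check by hand inside such a guest (not part of the CI flow itself):

    # QEMU's emulated NVMe controller enumerates as PCI ID 1b36:0010.
    lspci -nn | grep '1b36:0010'
    # Namespaces hang off the controller in sysfs: expect nvme2n1..nvme2n3.
    ls /sys/class/nvme/nvme2/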
00:02:53.940 + rm -f /tmp/spdk-ld-path
00:02:53.940 + source autorun-spdk.conf
00:02:53.940 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:53.940 ++ SPDK_TEST_NVME=1
00:02:53.940 ++ SPDK_TEST_FTL=1
00:02:53.940 ++ SPDK_TEST_ISAL=1
00:02:53.940 ++ SPDK_RUN_ASAN=1
00:02:53.940 ++ SPDK_RUN_UBSAN=1
00:02:53.940 ++ SPDK_TEST_XNVME=1
00:02:53.940 ++ SPDK_TEST_NVME_FDP=1
00:02:53.940 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:53.940 ++ RUN_NIGHTLY=0
00:02:53.940 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:53.940 + [[ -n '' ]]
00:02:53.940 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:53.940 + for M in /var/spdk/build-*-manifest.txt
00:02:53.940 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:53.940 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:53.940 + for M in /var/spdk/build-*-manifest.txt
00:02:53.940 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:53.940 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:53.940 ++ uname
00:02:53.940 + [[ Linux == \L\i\n\u\x ]]
00:02:53.940 + sudo dmesg -T
00:02:53.940 + sudo dmesg --clear
00:02:53.940 + dmesg_pid=5306
00:02:53.940 + [[ Fedora Linux == FreeBSD ]]
00:02:53.940 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:53.940 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:53.940 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:53.940 + sudo dmesg -Tw
00:02:53.940 + [[ -x /usr/src/fio-static/fio ]]
00:02:53.940 + export FIO_BIN=/usr/src/fio-static/fio
00:02:53.940 + FIO_BIN=/usr/src/fio-static/fio
00:02:53.940 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:53.940 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:53.940 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:53.940 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:53.940 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:53.940 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:53.940 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:53.940 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:53.940 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:53.940 Test configuration:
00:02:53.940 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:53.940 SPDK_TEST_NVME=1
00:02:53.940 SPDK_TEST_FTL=1
00:02:53.940 SPDK_TEST_ISAL=1
00:02:53.940 SPDK_RUN_ASAN=1
00:02:53.940 SPDK_RUN_UBSAN=1
00:02:53.940 SPDK_TEST_XNVME=1
00:02:53.940 SPDK_TEST_NVME_FDP=1
00:02:53.940 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:54.198 RUN_NIGHTLY=0
11:27:53 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
11:27:53 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
11:27:53 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
11:27:53 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
11:27:53 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
11:27:53 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
11:27:53 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
11:27:53 -- paths/export.sh@5 -- $ export PATH
11:27:53 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
11:27:53 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output
11:27:53 -- common/autobuild_common.sh@447 -- $ date +%s
00:02:54.199 11:27:53 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721906873.XXXXXX
11:27:53 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721906873.CQMeGs
11:27:53 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
11:27:53 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']'
11:27:53 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
11:27:53 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
11:27:53 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
11:27:53 -- common/autobuild_common.sh@463 -- $ get_config_params
11:27:53 -- common/autotest_common.sh@398 -- $ xtrace_disable
11:27:53 -- common/autotest_common.sh@10 -- $ set +x
11:27:53 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
11:27:53 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
11:27:53 -- pm/common@17 -- $ local monitor
11:27:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
11:27:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
11:27:53 -- pm/common@21 -- $ date +%s
11:27:53 -- pm/common@25 -- $ sleep 1
11:27:53 -- pm/common@21 -- $ date +%s
11:27:53 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721906873
11:27:53 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721906873
00:02:54.199 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721906873_collect-vmstat.pm.log
00:02:54.199 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721906873_collect-cpu-load.pm.log
00:02:55.134 11:27:54 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
11:27:54 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
11:27:54 -- spdk/autobuild.sh@12 -- $ umask 022
11:27:54 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
11:27:54 -- spdk/autobuild.sh@16 -- $ date -u
00:02:55.134 Thu Jul 25 11:27:54 AM UTC 2024
11:27:54 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:55.134 v24.09-pre-321-g704257090
11:27:54 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
11:27:54 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
11:27:54 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
11:27:54 -- common/autotest_common.sh@1107 -- $ xtrace_disable
11:27:54 -- common/autotest_common.sh@10 -- $ set +x
00:02:55.134 ************************************
00:02:55.134 START TEST asan
00:02:55.134 ************************************
00:02:55.134 using asan
00:02:55.134 11:27:54 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan'
00:02:55.134
00:02:55.134 real 0m0.000s
00:02:55.134 user 0m0.000s
00:02:55.134 sys 0m0.000s
00:02:55.134 11:27:54 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:02:55.134 ************************************
00:02:55.134 END TEST asan
00:02:55.134 ************************************
00:02:55.134 11:27:54 asan -- common/autotest_common.sh@10 -- $ set +x
11:27:54 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
11:27:54 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
11:27:54 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
11:27:54 -- common/autotest_common.sh@1107 -- $ xtrace_disable
11:27:54 -- common/autotest_common.sh@10 -- $ set +x
00:02:55.134 ************************************
00:02:55.134 START TEST ubsan
00:02:55.134 ************************************
00:02:55.134 using ubsan
00:02:55.134 11:27:54 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:02:55.134
00:02:55.134 real 0m0.000s
00:02:55.134 user 0m0.000s
00:02:55.134 sys 0m0.000s
00:02:55.134 11:27:54 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:02:55.134 11:27:54 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:55.134 ************************************
00:02:55.134 END TEST ubsan
00:02:55.134 ************************************
11:27:54 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
11:27:54 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
11:27:54 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
11:27:54 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
11:27:54 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
11:27:54 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
11:27:54 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
11:27:54 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
11:27:54 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:02:55.392 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:02:55.392 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:55.650 Using 'verbs' RDMA provider
00:03:12.038 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:03:24.229 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:03:24.229 Creating mk/config.mk...done.
00:03:24.229 Creating mk/cc.flags.mk...done.
00:03:24.229 Type 'make' to build.
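The configure invocation above is assembled by autobuild's get_config_params from the autorun-spdk.conf shown earlier: SPDK_RUN_ASAN/SPDK_RUN_UBSAN become --enable-asan/--enable-ubsan, and SPDK_TEST_XNVME becomes --with-xnvme. Reproducing the configure-and-build step by hand reduces to the following sketch, with the flag list copied verbatim from the log and -j10 matching SPDK_VAGRANT_VMCPU:

    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-asan --enable-coverage --with-ublk \
        --with-xnvme --with-shared
    make -j10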
00:03:24.229 11:28:22 -- spdk/autobuild.sh@69 -- $ run_test make make -j10
11:28:22 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
11:28:22 -- common/autotest_common.sh@1107 -- $ xtrace_disable
11:28:22 -- common/autotest_common.sh@10 -- $ set +x
00:03:24.229 ************************************
00:03:24.229 START TEST make
00:03:24.229 ************************************
00:03:24.229 11:28:22 make -- common/autotest_common.sh@1125 -- $ make -j10
00:03:24.229 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:03:24.229 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:03:24.229 meson setup builddir \
00:03:24.229 -Dwith-libaio=enabled \
00:03:24.229 -Dwith-liburing=enabled \
00:03:24.229 -Dwith-libvfn=disabled \
00:03:24.229 -Dwith-spdk=false && \
00:03:24.229 meson compile -C builddir && \
00:03:24.229 cd -)
00:03:24.229 make[1]: Nothing to be done for 'all'.
00:03:26.771 The Meson build system
00:03:26.771 Version: 1.3.1
00:03:26.771 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:03:26.771 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:03:26.771 Build type: native build
00:03:26.771 Project name: xnvme
00:03:26.771 Project version: 0.7.3
00:03:26.771 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:03:26.771 C linker for the host machine: cc ld.bfd 2.39-16
00:03:26.771 Host machine cpu family: x86_64
00:03:26.771 Host machine cpu: x86_64
00:03:26.771 Message: host_machine.system: linux
00:03:26.771 Compiler for C supports arguments -Wno-missing-braces: YES
00:03:26.771 Compiler for C supports arguments -Wno-cast-function-type: YES
00:03:26.771 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:03:26.771 Run-time dependency threads found: YES
00:03:26.771 Has header "setupapi.h" : NO
00:03:26.771 Has header "linux/blkzoned.h" : YES
00:03:26.771 Has header "linux/blkzoned.h" : YES (cached)
00:03:26.771 Has header "libaio.h" : YES
00:03:26.771 Library aio found: YES
00:03:26.771 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:03:26.771 Run-time dependency liburing found: YES 2.2
00:03:26.771 Dependency libvfn skipped: feature with-libvfn disabled
00:03:26.771 Run-time dependency appleframeworks found: NO (tried framework)
00:03:26.771 Run-time dependency appleframeworks found: NO (tried framework)
00:03:26.771 Configuring xnvme_config.h using configuration
00:03:26.771 Configuring xnvme.spec using configuration
00:03:26.771 Run-time dependency bash-completion found: YES 2.11
00:03:26.771 Message: Bash-completions: /usr/share/bash-completion/completions
00:03:26.771 Program cp found: YES (/usr/bin/cp)
00:03:26.771 Has header "winsock2.h" : NO
00:03:26.771 Has header "dbghelp.h" : NO
00:03:26.771 Library rpcrt4 found: NO
00:03:26.771 Library rt found: YES
00:03:26.771 Checking for function "clock_gettime" with dependency -lrt: YES
00:03:26.771 Found CMake: /usr/bin/cmake (3.27.7)
00:03:26.771 Run-time dependency _spdk found: NO (tried pkgconfig and cmake)
00:03:26.771 Run-time dependency wpdk found: NO (tried pkgconfig and cmake)
00:03:26.771 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake)
00:03:26.771 Build targets in project: 32
00:03:26.771
00:03:26.771 xnvme 0.7.3
00:03:26.771
00:03:26.771 User defined options
00:03:26.771 with-libaio : enabled
00:03:26.771 with-liburing: enabled
00:03:26.771 with-libvfn : disabled
00:03:26.771 with-spdk : false
00:03:26.771
00:03:26.771 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:27.383 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:03:27.383 [1/203] Generating toolbox/xnvme-driver-script with a custom command
00:03:27.383 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o
00:03:27.643 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o
00:03:27.643 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o
00:03:27.643 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o
00:03:27.643 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o
00:03:27.643 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o
00:03:27.643 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o
00:03:27.643 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o
00:03:27.643 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o
00:03:27.643 [11/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o
00:03:27.643 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o
00:03:27.643 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o
00:03:27.643 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o
00:03:27.643 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o
00:03:27.643 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o
00:03:27.643 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o
00:03:27.643 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o
00:03:27.901 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o
00:03:27.901 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o
00:03:27.901 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o
00:03:27.901 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o
00:03:27.901 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o
00:03:27.901 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o
00:03:27.901 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o
00:03:27.901 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o
00:03:27.901 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o
00:03:27.901 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o
00:03:27.901 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o
00:03:27.901 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o
00:03:27.901 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o
00:03:27.901 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o
00:03:27.901 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o
00:03:27.901 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o
00:03:27.901 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o
00:03:27.901 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o
00:03:27.901 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o
00:03:27.901 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o
00:03:27.901 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o
00:03:27.901 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o
00:03:27.901 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o
00:03:28.159 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o
00:03:28.159 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o
00:03:28.159 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o
00:03:28.159 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o
00:03:28.159 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o
00:03:28.159 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o
00:03:28.159 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o
00:03:28.159 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o
00:03:28.159 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o
00:03:28.159 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o
00:03:28.159 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o
00:03:28.159 [53/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf_entries.c.o
00:03:28.159 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o
00:03:28.159 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o
00:03:28.159 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o
00:03:28.159 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o
00:03:28.159 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o
00:03:28.159 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o
00:03:28.160 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o
00:03:28.160 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o
00:03:28.418 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o
00:03:28.418 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o
00:03:28.418 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o
00:03:28.418 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o
00:03:28.418 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o
00:03:28.418 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o
00:03:28.418 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o
00:03:28.418 [69/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o
00:03:28.418 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o
00:03:28.418 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o
00:03:28.418 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o
00:03:28.418 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o
00:03:28.676 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o
00:03:28.676 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o
00:03:28.676 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o
00:03:28.676 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o
00:03:28.676 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o
00:03:28.676 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o
00:03:28.676 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o
00:03:28.676 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o
00:03:28.676 [82/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o
00:03:28.676 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o
00:03:28.676 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o
00:03:28.934 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o
00:03:28.934 [86/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o
00:03:28.934 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o
00:03:28.934 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o
00:03:28.934 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o
00:03:28.934 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o
00:03:28.934 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o
00:03:28.934 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o
00:03:28.934 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o
00:03:28.934 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o
00:03:28.934 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o
00:03:28.934 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o
00:03:28.934 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o
00:03:28.934 [98/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o
00:03:28.934 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o
00:03:28.934 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o
00:03:28.934 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o
00:03:28.934 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o
00:03:28.934 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o
00:03:28.934 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o
00:03:28.934 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o
00:03:28.934 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o
00:03:28.934 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o
00:03:28.934 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o
00:03:29.192 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o
00:03:29.192 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o
00:03:29.192 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o
00:03:29.192 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o
00:03:29.192 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o
00:03:29.192 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o
00:03:29.192 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o
00:03:29.192 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o
00:03:29.192 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o
00:03:29.192 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o
00:03:29.192 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o
00:03:29.192 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o
00:03:29.192 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o
00:03:29.192 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o
00:03:29.192 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o
00:03:29.192 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o
00:03:29.192 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o
00:03:29.192 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o
00:03:29.192 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o
00:03:29.192 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o
00:03:29.450 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o
00:03:29.450 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o
00:03:29.450 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o
00:03:29.450 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o
00:03:29.450 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o
00:03:29.450 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o
00:03:29.450 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o
00:03:29.450 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o
00:03:29.450 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o
00:03:29.450 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o
00:03:29.450 [139/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o
00:03:29.707 [140/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o
00:03:29.707 [141/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o
00:03:29.707 [142/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o
00:03:29.707 [143/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o
00:03:29.707 [144/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o
00:03:29.707 [145/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o
00:03:29.707 [146/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o
00:03:29.707 [147/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o
00:03:29.707 [148/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o
00:03:29.964 [149/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o
00:03:29.964 [150/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o
00:03:29.964 [151/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o
00:03:29.964 [152/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o
00:03:29.964 [153/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o
00:03:29.964 [154/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o
00:03:29.964 [155/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o
00:03:29.964 [156/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o
00:03:29.964 [157/203] Linking target lib/libxnvme.so
00:03:29.964 [158/203] Compiling C object tests/xnvme_tests_map.p/map.c.o
00:03:29.964 [159/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o
00:03:30.222 [160/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o
00:03:30.222 [161/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o
00:03:30.222 [162/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o
00:03:30.222 [163/203] Compiling C object tools/xdd.p/xdd.c.o
00:03:30.222 [164/203] Compiling C object tools/kvs.p/kvs.c.o
00:03:30.222 [165/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o
00:03:30.222 [166/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o
00:03:30.222 [167/203] Compiling C object tools/zoned.p/zoned.c.o
00:03:30.222 [168/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o
00:03:30.222 [169/203] Compiling C object tools/lblk.p/lblk.c.o
00:03:30.222 [170/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o
00:03:30.479 [171/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o
00:03:30.479 [172/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o
00:03:30.479 [173/203] Linking static target lib/libxnvme.a
00:03:30.479 [174/203] Compiling C object tools/xnvme.p/xnvme.c.o
00:03:30.479 [175/203] Linking target tests/xnvme_tests_buf
00:03:30.479 [176/203] Linking target tests/xnvme_tests_async_intf
00:03:30.479 [177/203] Linking target tests/xnvme_tests_cli
00:03:30.479 [178/203] Linking target tests/xnvme_tests_enum
00:03:30.479 [179/203] Linking target tests/xnvme_tests_xnvme_file
00:03:30.479 [180/203] Linking target tests/xnvme_tests_lblk
00:03:30.479 [181/203] Linking target tests/xnvme_tests_znd_append
00:03:30.479 [182/203] Linking target tests/xnvme_tests_scc
00:03:30.479 [183/203] Linking target tests/xnvme_tests_znd_explicit_open
00:03:30.479 [184/203] Linking target tests/xnvme_tests_xnvme_cli
00:03:30.479 [185/203] Linking target tests/xnvme_tests_znd_state
00:03:30.479 [186/203] Linking target tests/xnvme_tests_znd_zrwa
00:03:30.479 [187/203] Linking target tests/xnvme_tests_ioworker
00:03:30.479 [188/203] Linking target tests/xnvme_tests_map
00:03:30.479 [189/203] Linking target tests/xnvme_tests_kvs
00:03:30.479 [190/203] Linking target tools/lblk
00:03:30.479 [191/203] Linking target tools/xdd
00:03:30.479 [192/203] Linking target tools/xnvme
00:03:30.737 [193/203] Linking target examples/xnvme_enum
00:03:30.737 [194/203] Linking target tools/xnvme_file
00:03:30.737 [195/203] Linking target tools/zoned
00:03:30.737 [196/203] Linking target tools/kvs
00:03:30.737 [197/203] Linking target examples/xnvme_dev
00:03:30.737 [198/203] Linking target examples/xnvme_single_sync
00:03:30.737 [199/203] Linking target examples/xnvme_hello
00:03:30.737 [200/203] Linking target examples/xnvme_io_async
00:03:30.737 [201/203] Linking target examples/xnvme_single_async
00:03:30.737 [202/203] Linking target examples/zoned_io_async
00:03:30.737 [203/203] Linking target examples/zoned_io_sync
00:03:30.737 INFO: autodetecting backend as ninja
00:03:30.737 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:03:40.698 /home/vagrant/spdk_repo/spdk/xnvmebuild
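xnvme is built as a nested Meson/ninja project before the main SPDK make continues; the feature matrix in its summary (libaio and liburing enabled, libvfn and SPDK integration off) comes straight from the -D options echoed at the start of make. To inspect or flip those options in an existing build directory, a sketch using stock Meson commands (paths follow this log's layout):

    cd /home/vagrant/spdk_repo/spdk/xnvme
    # List every project option with its current value.
    meson configure builddir
    # Example: enable the libvfn backend and rebuild incrementally.
    meson configure builddir -Dwith-libvfn=enabled
    meson compile -C builddir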
00:03:40.698 The Meson build system
00:03:40.698 Version: 1.3.1
00:03:40.698 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:03:40.698 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:03:40.698 Build type: native build
00:03:40.698 Program cat found: YES (/usr/bin/cat)
00:03:40.698 Project name: DPDK
00:03:40.698 Project version: 24.03.0
00:03:40.698 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:03:40.698 C linker for the host machine: cc ld.bfd 2.39-16
00:03:40.698 Host machine cpu family: x86_64
00:03:40.698 Host machine cpu: x86_64
00:03:40.698 Message: ## Building in Developer Mode ##
00:03:40.698 Program pkg-config found: YES (/usr/bin/pkg-config)
00:03:40.698 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:03:40.698 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:03:40.698 Program python3 found: YES (/usr/bin/python3)
00:03:40.698 Program cat found: YES (/usr/bin/cat)
00:03:40.698 Compiler for C supports arguments -march=native: YES
00:03:40.698 Checking for size of "void *" : 8
00:03:40.698 Checking for size of "void *" : 8 (cached)
00:03:40.698 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:03:40.698 Library m found: YES
00:03:40.698 Library numa found: YES
00:03:40.698 Has header "numaif.h" : YES
00:03:40.698 Library fdt found: NO
00:03:40.698 Library execinfo found: NO
00:03:40.698 Has header "execinfo.h" : YES
00:03:40.698 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:03:40.698 Run-time dependency libarchive found: NO (tried pkgconfig)
00:03:40.698 Run-time dependency libbsd found: NO (tried pkgconfig)
00:03:40.698 Run-time dependency jansson found: NO (tried pkgconfig)
00:03:40.698 Run-time dependency openssl found: YES 3.0.9
00:03:40.698 Run-time dependency libpcap found: YES 1.10.4
00:03:40.698 Has header "pcap.h" with dependency libpcap: YES
00:03:40.699 Compiler for C supports arguments -Wcast-qual: YES
00:03:40.699 Compiler for C supports arguments -Wdeprecated: YES
00:03:40.699 Compiler for C supports arguments -Wformat: YES
00:03:40.699 Compiler for C supports arguments -Wformat-nonliteral: NO
00:03:40.699 Compiler for C supports arguments -Wformat-security: NO
00:03:40.699 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:40.699 Compiler for C supports arguments -Wmissing-prototypes: YES
00:03:40.699 Compiler for C supports arguments -Wnested-externs: YES
00:03:40.699 Compiler for C supports arguments -Wold-style-definition: YES
00:03:40.699 Compiler for C supports arguments -Wpointer-arith: YES
00:03:40.699 Compiler for C supports arguments -Wsign-compare: YES
00:03:40.699 Compiler for C supports arguments -Wstrict-prototypes: YES
00:03:40.699 Compiler for C supports arguments -Wundef: YES
00:03:40.699 Compiler for C supports arguments -Wwrite-strings: YES
00:03:40.699 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:03:40.699 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:03:40.699 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:40.699 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:03:40.699 Program objdump found: YES (/usr/bin/objdump)
00:03:40.699 Compiler for C supports arguments -mavx512f: YES
00:03:40.699 Checking if "AVX512 checking" compiles: YES
00:03:40.699 Fetching value of define "__SSE4_2__" : 1
00:03:40.699 Fetching value of define "__AES__" : 1
00:03:40.699 Fetching value of define "__AVX__" : 1
00:03:40.699 Fetching value of define "__AVX2__" : 1
00:03:40.699 Fetching value of define "__AVX512BW__" : (undefined)
00:03:40.699 Fetching value of define "__AVX512CD__" : (undefined)
00:03:40.699 Fetching value of define "__AVX512DQ__" : (undefined)
00:03:40.699 Fetching value of define "__AVX512F__" : (undefined)
00:03:40.699 Fetching value of define "__AVX512VL__" : (undefined)
00:03:40.699 Fetching value of define "__PCLMUL__" : 1
00:03:40.699 Fetching value of define "__RDRND__" : 1
00:03:40.699 Fetching value of define "__RDSEED__" : 1
00:03:40.699 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:03:40.699 Fetching value of define "__znver1__" : (undefined)
00:03:40.699 Fetching value of define "__znver2__" : (undefined)
00:03:40.699 Fetching value of define "__znver3__" : (undefined)
00:03:40.699 Fetching value of define "__znver4__" : (undefined)
00:03:40.699 Library asan found: YES
00:03:40.699 Compiler for C supports arguments -Wno-format-truncation: YES
00:03:40.699 Message: lib/log: Defining dependency "log"
00:03:40.699 Message: lib/kvargs: Defining dependency "kvargs"
00:03:40.699 Message: lib/telemetry: Defining dependency "telemetry"
00:03:40.699 Library rt found: YES
00:03:40.699 Checking for function "getentropy" : NO
00:03:40.699 Message: lib/eal: Defining dependency "eal"
00:03:40.699 Message: lib/ring: Defining dependency "ring"
00:03:40.699 Message: lib/rcu: Defining dependency "rcu"
00:03:40.699 Message: lib/mempool: Defining dependency "mempool"
00:03:40.699 Message: lib/mbuf: Defining dependency "mbuf"
00:03:40.699 Fetching value of define "__PCLMUL__" : 1 (cached)
00:03:40.699 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:03:40.699 Compiler for C supports arguments -mpclmul: YES
00:03:40.699 Compiler for C supports arguments -maes: YES
00:03:40.699 Compiler for C supports arguments -mavx512f: YES (cached)
00:03:40.699 Compiler for C supports arguments -mavx512bw: YES
00:03:40.699 Compiler for C supports arguments -mavx512dq: YES
00:03:40.699 Compiler for C supports arguments -mavx512vl: YES
00:03:40.699 Compiler for C supports arguments -mvpclmulqdq: YES
00:03:40.699 Compiler for C supports arguments -mavx2: YES
00:03:40.699 Compiler for C supports arguments -mavx: YES
00:03:40.699 Message: lib/net: Defining dependency "net"
00:03:40.699 Message: lib/meter: Defining dependency "meter"
00:03:40.699 Message: lib/ethdev: Defining dependency "ethdev"
00:03:40.699 Message: lib/pci: Defining dependency "pci"
00:03:40.699 Message: lib/cmdline: Defining dependency "cmdline"
00:03:40.699 Message: lib/hash: Defining dependency "hash"
00:03:40.699 Message: lib/timer: Defining dependency "timer"
00:03:40.699 Message: lib/compressdev: Defining dependency "compressdev"
00:03:40.699 Message: lib/cryptodev: Defining dependency "cryptodev"
00:03:40.699 Message: lib/dmadev: Defining dependency "dmadev"
00:03:40.699 Compiler for C supports arguments -Wno-cast-qual: YES
00:03:40.699 Message: lib/power: Defining dependency "power"
00:03:40.699 Message: lib/reorder: Defining dependency "reorder"
00:03:40.699 Message: lib/security: Defining dependency "security"
00:03:40.699 Has header "linux/userfaultfd.h" : YES
00:03:40.699 Has header "linux/vduse.h" : YES
00:03:40.699 Message: lib/vhost: Defining dependency "vhost"
00:03:40.699 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:03:40.699 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:03:40.699 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:03:40.699 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:03:40.699 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:03:40.699 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:03:40.699 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:03:40.699 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:03:40.699 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:03:40.699 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:03:40.699 Program doxygen found: YES (/usr/bin/doxygen)
00:03:40.699 Configuring doxy-api-html.conf using configuration
00:03:40.699 Configuring doxy-api-man.conf using configuration
00:03:40.699 Program mandb found: YES (/usr/bin/mandb)
00:03:40.699 Program sphinx-build found: NO
00:03:40.699 Configuring rte_build_config.h using configuration
00:03:40.699 Message:
00:03:40.699 =================
00:03:40.699 Applications Enabled
00:03:40.699 =================
00:03:40.699
00:03:40.699 apps:
00:03:40.699
00:03:40.699
00:03:40.699 Message:
00:03:40.699 =================
00:03:40.699 Libraries Enabled
00:03:40.699 =================
00:03:40.699
00:03:40.699 libs:
00:03:40.699 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:03:40.699 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:03:40.699 cryptodev, dmadev, power, reorder, security, vhost,
00:03:40.699
00:03:40.699 Message:
00:03:40.699 ===============
00:03:40.699 Drivers Enabled
00:03:40.699 ===============
00:03:40.699
00:03:40.699 common:
00:03:40.699
00:03:40.699 bus:
00:03:40.699 pci, vdev,
00:03:40.699 mempool:
00:03:40.699 ring,
00:03:40.699 dma:
00:03:40.699
00:03:40.699 net:
00:03:40.699
00:03:40.699 crypto:
00:03:40.699
00:03:40.699 compress:
00:03:40.699
00:03:40.699 vdpa:
00:03:40.699
00:03:40.699
00:03:40.699 Message:
00:03:40.699 =================
00:03:40.699 Content Skipped
00:03:40.699 =================
00:03:40.699
00:03:40.699 apps:
00:03:40.699 dumpcap: explicitly disabled via build config
00:03:40.699 graph: explicitly disabled via build config
00:03:40.699 pdump: explicitly disabled via build config
00:03:40.699 proc-info: explicitly disabled via build config
00:03:40.699 test-acl: explicitly disabled via build config
00:03:40.699 test-bbdev: explicitly disabled via build config
00:03:40.699 test-cmdline: explicitly disabled via build config
00:03:40.699 test-compress-perf: explicitly disabled via build config
00:03:40.699 test-crypto-perf: explicitly disabled via build config
00:03:40.699 test-dma-perf: explicitly disabled via build config
00:03:40.699 test-eventdev: explicitly disabled via build config
00:03:40.699 test-fib: explicitly disabled via build config
00:03:40.699 test-flow-perf: explicitly disabled via build config
00:03:40.699 test-gpudev: explicitly disabled via build config
00:03:40.699 test-mldev: explicitly disabled via build config
00:03:40.699 test-pipeline: explicitly disabled via build config
00:03:40.699 test-pmd: explicitly disabled via build config
00:03:40.699 test-regex: explicitly disabled via build config
00:03:40.699 test-sad: explicitly disabled via build config
00:03:40.699 test-security-perf: explicitly disabled via build config
00:03:40.699
00:03:40.699 libs:
00:03:40.699 argparse: explicitly disabled via build config
00:03:40.699 metrics: explicitly disabled via build config
00:03:40.699 acl: explicitly disabled via build config
00:03:40.699 bbdev: explicitly disabled via build config
00:03:40.699 bitratestats: explicitly disabled via build config
00:03:40.699 bpf: explicitly disabled via build config
00:03:40.699 cfgfile: explicitly disabled via build config
00:03:40.699 distributor: explicitly disabled via build config
00:03:40.699 efd: explicitly disabled via build config
00:03:40.699 eventdev: explicitly disabled via build config
00:03:40.699 dispatcher: explicitly disabled via build config
00:03:40.699 gpudev: explicitly disabled via build config
00:03:40.699 gro: explicitly disabled via build config
00:03:40.699 gso: explicitly disabled via build config
00:03:40.699 ip_frag: explicitly disabled via build config
00:03:40.699 jobstats: explicitly disabled via build config
00:03:40.699 latencystats: explicitly disabled via build config
00:03:40.699 lpm: explicitly disabled via build config
00:03:40.700 member: explicitly disabled via build config
00:03:40.700 pcapng: explicitly disabled via build config
00:03:40.700 rawdev: explicitly disabled via build config
00:03:40.700 regexdev: explicitly disabled via build config
00:03:40.700 mldev: explicitly disabled via build config
00:03:40.700 rib: explicitly disabled via build config
00:03:40.700 sched: explicitly disabled via build config
00:03:40.700 stack: explicitly disabled via build config
00:03:40.700 ipsec: explicitly disabled via build config
00:03:40.700 pdcp: explicitly disabled via build config
00:03:40.700 fib: explicitly disabled via build config
00:03:40.700 port: explicitly disabled via build config
00:03:40.700 pdump: explicitly disabled via build config
00:03:40.700 table: explicitly disabled via
build config 00:03:40.700 pipeline: explicitly disabled via build config 00:03:40.700 graph: explicitly disabled via build config 00:03:40.700 node: explicitly disabled via build config 00:03:40.700 00:03:40.700 drivers: 00:03:40.700 common/cpt: not in enabled drivers build config 00:03:40.700 common/dpaax: not in enabled drivers build config 00:03:40.700 common/iavf: not in enabled drivers build config 00:03:40.700 common/idpf: not in enabled drivers build config 00:03:40.700 common/ionic: not in enabled drivers build config 00:03:40.700 common/mvep: not in enabled drivers build config 00:03:40.700 common/octeontx: not in enabled drivers build config 00:03:40.700 bus/auxiliary: not in enabled drivers build config 00:03:40.700 bus/cdx: not in enabled drivers build config 00:03:40.700 bus/dpaa: not in enabled drivers build config 00:03:40.700 bus/fslmc: not in enabled drivers build config 00:03:40.700 bus/ifpga: not in enabled drivers build config 00:03:40.700 bus/platform: not in enabled drivers build config 00:03:40.700 bus/uacce: not in enabled drivers build config 00:03:40.700 bus/vmbus: not in enabled drivers build config 00:03:40.700 common/cnxk: not in enabled drivers build config 00:03:40.700 common/mlx5: not in enabled drivers build config 00:03:40.700 common/nfp: not in enabled drivers build config 00:03:40.700 common/nitrox: not in enabled drivers build config 00:03:40.700 common/qat: not in enabled drivers build config 00:03:40.700 common/sfc_efx: not in enabled drivers build config 00:03:40.700 mempool/bucket: not in enabled drivers build config 00:03:40.700 mempool/cnxk: not in enabled drivers build config 00:03:40.700 mempool/dpaa: not in enabled drivers build config 00:03:40.700 mempool/dpaa2: not in enabled drivers build config 00:03:40.700 mempool/octeontx: not in enabled drivers build config 00:03:40.700 mempool/stack: not in enabled drivers build config 00:03:40.700 dma/cnxk: not in enabled drivers build config 00:03:40.700 dma/dpaa: not in enabled drivers build config 00:03:40.700 dma/dpaa2: not in enabled drivers build config 00:03:40.700 dma/hisilicon: not in enabled drivers build config 00:03:40.700 dma/idxd: not in enabled drivers build config 00:03:40.700 dma/ioat: not in enabled drivers build config 00:03:40.700 dma/skeleton: not in enabled drivers build config 00:03:40.700 net/af_packet: not in enabled drivers build config 00:03:40.700 net/af_xdp: not in enabled drivers build config 00:03:40.700 net/ark: not in enabled drivers build config 00:03:40.700 net/atlantic: not in enabled drivers build config 00:03:40.700 net/avp: not in enabled drivers build config 00:03:40.700 net/axgbe: not in enabled drivers build config 00:03:40.700 net/bnx2x: not in enabled drivers build config 00:03:40.700 net/bnxt: not in enabled drivers build config 00:03:40.700 net/bonding: not in enabled drivers build config 00:03:40.700 net/cnxk: not in enabled drivers build config 00:03:40.700 net/cpfl: not in enabled drivers build config 00:03:40.700 net/cxgbe: not in enabled drivers build config 00:03:40.700 net/dpaa: not in enabled drivers build config 00:03:40.700 net/dpaa2: not in enabled drivers build config 00:03:40.700 net/e1000: not in enabled drivers build config 00:03:40.700 net/ena: not in enabled drivers build config 00:03:40.700 net/enetc: not in enabled drivers build config 00:03:40.700 net/enetfec: not in enabled drivers build config 00:03:40.700 net/enic: not in enabled drivers build config 00:03:40.700 net/failsafe: not in enabled drivers build config 00:03:40.700 
net/fm10k: not in enabled drivers build config 00:03:40.700 net/gve: not in enabled drivers build config 00:03:40.700 net/hinic: not in enabled drivers build config 00:03:40.700 net/hns3: not in enabled drivers build config 00:03:40.700 net/i40e: not in enabled drivers build config 00:03:40.700 net/iavf: not in enabled drivers build config 00:03:40.700 net/ice: not in enabled drivers build config 00:03:40.700 net/idpf: not in enabled drivers build config 00:03:40.700 net/igc: not in enabled drivers build config 00:03:40.700 net/ionic: not in enabled drivers build config 00:03:40.700 net/ipn3ke: not in enabled drivers build config 00:03:40.700 net/ixgbe: not in enabled drivers build config 00:03:40.700 net/mana: not in enabled drivers build config 00:03:40.700 net/memif: not in enabled drivers build config 00:03:40.700 net/mlx4: not in enabled drivers build config 00:03:40.700 net/mlx5: not in enabled drivers build config 00:03:40.700 net/mvneta: not in enabled drivers build config 00:03:40.700 net/mvpp2: not in enabled drivers build config 00:03:40.700 net/netvsc: not in enabled drivers build config 00:03:40.700 net/nfb: not in enabled drivers build config 00:03:40.700 net/nfp: not in enabled drivers build config 00:03:40.700 net/ngbe: not in enabled drivers build config 00:03:40.700 net/null: not in enabled drivers build config 00:03:40.700 net/octeontx: not in enabled drivers build config 00:03:40.700 net/octeon_ep: not in enabled drivers build config 00:03:40.700 net/pcap: not in enabled drivers build config 00:03:40.700 net/pfe: not in enabled drivers build config 00:03:40.700 net/qede: not in enabled drivers build config 00:03:40.700 net/ring: not in enabled drivers build config 00:03:40.700 net/sfc: not in enabled drivers build config 00:03:40.700 net/softnic: not in enabled drivers build config 00:03:40.700 net/tap: not in enabled drivers build config 00:03:40.700 net/thunderx: not in enabled drivers build config 00:03:40.700 net/txgbe: not in enabled drivers build config 00:03:40.700 net/vdev_netvsc: not in enabled drivers build config 00:03:40.700 net/vhost: not in enabled drivers build config 00:03:40.700 net/virtio: not in enabled drivers build config 00:03:40.700 net/vmxnet3: not in enabled drivers build config 00:03:40.700 raw/*: missing internal dependency, "rawdev" 00:03:40.700 crypto/armv8: not in enabled drivers build config 00:03:40.700 crypto/bcmfs: not in enabled drivers build config 00:03:40.700 crypto/caam_jr: not in enabled drivers build config 00:03:40.700 crypto/ccp: not in enabled drivers build config 00:03:40.700 crypto/cnxk: not in enabled drivers build config 00:03:40.700 crypto/dpaa_sec: not in enabled drivers build config 00:03:40.700 crypto/dpaa2_sec: not in enabled drivers build config 00:03:40.700 crypto/ipsec_mb: not in enabled drivers build config 00:03:40.700 crypto/mlx5: not in enabled drivers build config 00:03:40.700 crypto/mvsam: not in enabled drivers build config 00:03:40.700 crypto/nitrox: not in enabled drivers build config 00:03:40.700 crypto/null: not in enabled drivers build config 00:03:40.700 crypto/octeontx: not in enabled drivers build config 00:03:40.700 crypto/openssl: not in enabled drivers build config 00:03:40.700 crypto/scheduler: not in enabled drivers build config 00:03:40.700 crypto/uadk: not in enabled drivers build config 00:03:40.700 crypto/virtio: not in enabled drivers build config 00:03:40.700 compress/isal: not in enabled drivers build config 00:03:40.700 compress/mlx5: not in enabled drivers build config 00:03:40.700 
compress/nitrox: not in enabled drivers build config 00:03:40.700 compress/octeontx: not in enabled drivers build config 00:03:40.700 compress/zlib: not in enabled drivers build config 00:03:40.700 regex/*: missing internal dependency, "regexdev" 00:03:40.700 ml/*: missing internal dependency, "mldev" 00:03:40.700 vdpa/ifc: not in enabled drivers build config 00:03:40.700 vdpa/mlx5: not in enabled drivers build config 00:03:40.700 vdpa/nfp: not in enabled drivers build config 00:03:40.700 vdpa/sfc: not in enabled drivers build config 00:03:40.700 event/*: missing internal dependency, "eventdev" 00:03:40.700 baseband/*: missing internal dependency, "bbdev" 00:03:40.700 gpu/*: missing internal dependency, "gpudev" 00:03:40.700 00:03:40.700 00:03:40.700 Build targets in project: 85 00:03:40.700 00:03:40.700 DPDK 24.03.0 00:03:40.700 00:03:40.700 User defined options 00:03:40.700 buildtype : debug 00:03:40.700 default_library : shared 00:03:40.700 libdir : lib 00:03:40.700 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:40.700 b_sanitize : address 00:03:40.700 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:40.700 c_link_args : 00:03:40.700 cpu_instruction_set: native 00:03:40.701 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:03:40.701 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:03:40.701 enable_docs : false 00:03:40.701 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:03:40.701 enable_kmods : false 00:03:40.701 max_lcores : 128 00:03:40.701 tests : false 00:03:40.701 00:03:40.701 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:40.701 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:03:40.701 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:40.701 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:40.701 [3/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:40.701 [4/268] Linking static target lib/librte_kvargs.a 00:03:40.701 [5/268] Linking static target lib/librte_log.a 00:03:40.701 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:41.266 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:41.266 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:41.523 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:41.780 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:41.780 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:41.780 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:41.780 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:41.780 [14/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.037 [15/268] Linking target lib/librte_log.so.24.1 00:03:42.037 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:42.037 [17/268] Compiling 
C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:42.037 [18/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:42.037 [19/268] Linking static target lib/librte_telemetry.a 00:03:42.295 [20/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:42.295 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:42.295 [22/268] Linking target lib/librte_kvargs.so.24.1 00:03:42.553 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:42.811 [24/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:42.811 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:42.811 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:42.811 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:43.069 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:43.069 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:43.069 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:43.328 [31/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:43.328 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:43.328 [33/268] Linking target lib/librte_telemetry.so.24.1 00:03:43.328 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:43.328 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:43.892 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:43.892 [37/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:43.892 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:43.892 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:43.892 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:43.892 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:43.892 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:44.150 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:44.150 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:44.407 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:44.407 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:44.695 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:44.695 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:44.696 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:44.696 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:44.953 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:45.211 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:45.211 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:45.212 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:45.469 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:45.469 [56/268] Compiling 
C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:45.726 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:45.726 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:45.983 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:45.983 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:45.983 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:45.983 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:46.240 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:46.240 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:46.498 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:46.498 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:46.755 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:47.012 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:47.012 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:47.269 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:47.269 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:47.269 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:47.269 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:47.269 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:47.526 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:47.526 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:47.785 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:47.785 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:47.785 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:47.785 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:48.350 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:48.350 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:48.607 [83/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:48.607 [84/268] Linking static target lib/librte_ring.a 00:03:48.607 [85/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:48.865 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:48.865 [87/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:49.122 [88/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:49.122 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:49.122 [90/268] Linking static target lib/librte_eal.a 00:03:49.123 [91/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:49.380 [92/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:49.638 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:49.638 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:49.638 [95/268] Linking static target lib/librte_mempool.a 00:03:49.638 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:49.896 [97/268] Compiling C object 
lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:49.896 [98/268] Linking static target lib/librte_rcu.a 00:03:49.896 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:49.896 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:49.896 [101/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:49.896 [102/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:50.153 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:50.153 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:50.411 [105/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:50.669 [106/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:50.669 [107/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:50.669 [108/268] Linking static target lib/librte_meter.a 00:03:50.669 [109/268] Linking static target lib/librte_mbuf.a 00:03:50.669 [110/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:50.669 [111/268] Linking static target lib/librte_net.a 00:03:50.927 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:50.927 [113/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:51.185 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:51.185 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:51.185 [116/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:51.185 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:51.474 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:51.474 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:51.748 [120/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:51.748 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:52.311 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:52.311 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:52.569 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:52.569 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:52.569 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:52.569 [127/268] Linking static target lib/librte_pci.a 00:03:52.569 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:52.826 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:52.826 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:52.826 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:52.826 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:52.826 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:53.084 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:53.084 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:53.084 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:53.084 [137/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:53.084 [138/268] 
Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:53.084 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:53.084 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:53.084 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:53.084 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:53.341 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:53.341 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:53.341 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:53.599 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:53.857 [147/268] Linking static target lib/librte_cmdline.a 00:03:53.857 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:53.857 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:54.115 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:54.115 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:54.373 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:54.373 [153/268] Linking static target lib/librte_timer.a 00:03:54.373 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:54.691 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:54.691 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:54.691 [157/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:54.948 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:54.948 [159/268] Linking static target lib/librte_compressdev.a 00:03:54.948 [160/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:54.948 [161/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:55.206 [162/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:55.206 [163/268] Linking static target lib/librte_ethdev.a 00:03:55.465 [164/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:55.724 [165/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:55.724 [166/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:55.724 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:55.724 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:55.724 [169/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:55.724 [170/268] Linking static target lib/librte_dmadev.a 00:03:55.983 [171/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:55.983 [172/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:55.983 [173/268] Linking static target lib/librte_hash.a 00:03:55.983 [174/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:56.549 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:56.549 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:56.806 [177/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:56.806 
[178/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:56.806 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:56.806 [180/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:57.065 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:57.629 [182/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:57.629 [183/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:57.629 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:57.629 [185/268] Linking static target lib/librte_reorder.a 00:03:57.629 [186/268] Linking static target lib/librte_power.a 00:03:57.629 [187/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:57.629 [188/268] Linking static target lib/librte_cryptodev.a 00:03:57.629 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:57.886 [190/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:57.886 [191/268] Linking static target lib/librte_security.a 00:03:57.886 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:57.886 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:58.143 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:58.705 [195/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:58.705 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:58.705 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:58.963 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:58.963 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:59.220 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:59.221 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:59.478 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:59.478 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:59.478 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:59.478 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:59.735 [206/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:59.735 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:59.993 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:59.993 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:59.993 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:59.993 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:04:00.253 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:04:00.253 [213/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:00.253 [214/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:04:00.253 [215/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:00.253 [216/268] Linking static target drivers/librte_bus_vdev.a 00:04:00.253 [217/268] Compiling C object 
drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:00.253 [218/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:00.253 [219/268] Linking static target drivers/librte_bus_pci.a 00:04:00.253 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:04:00.253 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:04:00.511 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:04:00.511 [223/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:00.511 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:00.511 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:00.511 [226/268] Linking static target drivers/librte_mempool_ring.a 00:04:00.768 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:01.336 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:04:01.642 [229/268] Linking target lib/librte_eal.so.24.1 00:04:01.642 [230/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:04:01.900 [231/268] Linking target lib/librte_pci.so.24.1 00:04:01.900 [232/268] Linking target lib/librte_timer.so.24.1 00:04:01.900 [233/268] Linking target drivers/librte_bus_vdev.so.24.1 00:04:01.900 [234/268] Linking target lib/librte_dmadev.so.24.1 00:04:01.900 [235/268] Linking target lib/librte_ring.so.24.1 00:04:01.900 [236/268] Linking target lib/librte_meter.so.24.1 00:04:01.900 [237/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:04:01.900 [238/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:04:01.900 [239/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:04:01.900 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:04:02.157 [241/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:04:02.157 [242/268] Linking target drivers/librte_bus_pci.so.24.1 00:04:02.157 [243/268] Linking target lib/librte_rcu.so.24.1 00:04:02.157 [244/268] Linking target lib/librte_mempool.so.24.1 00:04:02.413 [245/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:04:02.413 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:04:02.413 [247/268] Linking target drivers/librte_mempool_ring.so.24.1 00:04:02.413 [248/268] Linking target lib/librte_mbuf.so.24.1 00:04:02.413 [249/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:04:02.671 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:04:02.671 [251/268] Linking target lib/librte_reorder.so.24.1 00:04:02.671 [252/268] Linking target lib/librte_net.so.24.1 00:04:02.671 [253/268] Linking target lib/librte_compressdev.so.24.1 00:04:02.671 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:04:02.671 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:04:02.928 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:04:02.928 [257/268] Linking target lib/librte_cmdline.so.24.1 00:04:02.928 [258/268] Linking target 
lib/librte_hash.so.24.1 00:04:02.928 [259/268] Linking target lib/librte_security.so.24.1 00:04:02.928 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:04:03.187 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:03.187 [262/268] Linking target lib/librte_ethdev.so.24.1 00:04:03.445 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:04:03.445 [264/268] Linking target lib/librte_power.so.24.1 00:04:07.631 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:04:07.631 [266/268] Linking static target lib/librte_vhost.a 00:04:09.003 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:04:09.004 [268/268] Linking target lib/librte_vhost.so.24.1 00:04:09.004 INFO: autodetecting backend as ninja 00:04:09.004 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:04:10.377 CC lib/ut/ut.o 00:04:10.377 CC lib/log/log.o 00:04:10.377 CC lib/log/log_flags.o 00:04:10.377 CC lib/log/log_deprecated.o 00:04:10.377 CC lib/ut_mock/mock.o 00:04:10.377 LIB libspdk_ut.a 00:04:10.634 SO libspdk_ut.so.2.0 00:04:10.634 LIB libspdk_ut_mock.a 00:04:10.634 LIB libspdk_log.a 00:04:10.634 SO libspdk_ut_mock.so.6.0 00:04:10.634 SO libspdk_log.so.7.0 00:04:10.634 SYMLINK libspdk_ut.so 00:04:10.634 SYMLINK libspdk_ut_mock.so 00:04:10.634 SYMLINK libspdk_log.so 00:04:10.892 CXX lib/trace_parser/trace.o 00:04:10.892 CC lib/dma/dma.o 00:04:10.892 CC lib/ioat/ioat.o 00:04:10.892 CC lib/util/base64.o 00:04:10.892 CC lib/util/bit_array.o 00:04:10.892 CC lib/util/crc16.o 00:04:10.892 CC lib/util/cpuset.o 00:04:10.892 CC lib/util/crc32.o 00:04:10.892 CC lib/util/crc32c.o 00:04:10.892 CC lib/vfio_user/host/vfio_user_pci.o 00:04:10.892 CC lib/util/crc32_ieee.o 00:04:10.892 CC lib/vfio_user/host/vfio_user.o 00:04:11.150 CC lib/util/crc64.o 00:04:11.150 LIB libspdk_dma.a 00:04:11.150 CC lib/util/dif.o 00:04:11.150 SO libspdk_dma.so.4.0 00:04:11.150 CC lib/util/fd.o 00:04:11.150 CC lib/util/fd_group.o 00:04:11.150 SYMLINK libspdk_dma.so 00:04:11.150 CC lib/util/file.o 00:04:11.150 CC lib/util/hexlify.o 00:04:11.150 LIB libspdk_ioat.a 00:04:11.150 SO libspdk_ioat.so.7.0 00:04:11.150 CC lib/util/iov.o 00:04:11.409 CC lib/util/math.o 00:04:11.409 CC lib/util/net.o 00:04:11.409 SYMLINK libspdk_ioat.so 00:04:11.409 CC lib/util/pipe.o 00:04:11.409 LIB libspdk_vfio_user.a 00:04:11.409 CC lib/util/strerror_tls.o 00:04:11.409 CC lib/util/string.o 00:04:11.409 SO libspdk_vfio_user.so.5.0 00:04:11.409 SYMLINK libspdk_vfio_user.so 00:04:11.409 CC lib/util/uuid.o 00:04:11.409 CC lib/util/xor.o 00:04:11.409 CC lib/util/zipf.o 00:04:11.975 LIB libspdk_util.a 00:04:12.233 SO libspdk_util.so.10.0 00:04:12.233 LIB libspdk_trace_parser.a 00:04:12.233 SYMLINK libspdk_util.so 00:04:12.233 SO libspdk_trace_parser.so.5.0 00:04:12.491 SYMLINK libspdk_trace_parser.so 00:04:12.491 CC lib/conf/conf.o 00:04:12.491 CC lib/rdma_utils/rdma_utils.o 00:04:12.491 CC lib/vmd/vmd.o 00:04:12.491 CC lib/vmd/led.o 00:04:12.491 CC lib/rdma_provider/common.o 00:04:12.491 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:12.491 CC lib/env_dpdk/env.o 00:04:12.491 CC lib/env_dpdk/memory.o 00:04:12.491 CC lib/idxd/idxd.o 00:04:12.491 CC lib/json/json_parse.o 00:04:12.748 CC lib/json/json_util.o 00:04:12.748 CC lib/json/json_write.o 00:04:12.748 LIB libspdk_rdma_provider.a 00:04:12.748 LIB libspdk_conf.a 
00:04:12.748 SO libspdk_rdma_provider.so.6.0 00:04:12.748 SO libspdk_conf.so.6.0 00:04:13.006 CC lib/idxd/idxd_user.o 00:04:13.006 SYMLINK libspdk_rdma_provider.so 00:04:13.006 CC lib/idxd/idxd_kernel.o 00:04:13.006 LIB libspdk_rdma_utils.a 00:04:13.006 SYMLINK libspdk_conf.so 00:04:13.006 CC lib/env_dpdk/pci.o 00:04:13.006 SO libspdk_rdma_utils.so.1.0 00:04:13.006 SYMLINK libspdk_rdma_utils.so 00:04:13.006 CC lib/env_dpdk/init.o 00:04:13.006 CC lib/env_dpdk/threads.o 00:04:13.006 CC lib/env_dpdk/pci_ioat.o 00:04:13.264 LIB libspdk_json.a 00:04:13.264 SO libspdk_json.so.6.0 00:04:13.264 CC lib/env_dpdk/pci_virtio.o 00:04:13.264 CC lib/env_dpdk/pci_vmd.o 00:04:13.264 CC lib/env_dpdk/pci_idxd.o 00:04:13.264 SYMLINK libspdk_json.so 00:04:13.264 CC lib/env_dpdk/pci_event.o 00:04:13.264 LIB libspdk_idxd.a 00:04:13.264 CC lib/env_dpdk/sigbus_handler.o 00:04:13.522 CC lib/env_dpdk/pci_dpdk.o 00:04:13.522 SO libspdk_idxd.so.12.0 00:04:13.522 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:13.522 LIB libspdk_vmd.a 00:04:13.522 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:13.522 SYMLINK libspdk_idxd.so 00:04:13.522 SO libspdk_vmd.so.6.0 00:04:13.522 SYMLINK libspdk_vmd.so 00:04:13.780 CC lib/jsonrpc/jsonrpc_server.o 00:04:13.780 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:13.780 CC lib/jsonrpc/jsonrpc_client.o 00:04:13.780 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:14.038 LIB libspdk_jsonrpc.a 00:04:14.038 SO libspdk_jsonrpc.so.6.0 00:04:14.295 SYMLINK libspdk_jsonrpc.so 00:04:14.552 CC lib/rpc/rpc.o 00:04:14.809 LIB libspdk_rpc.a 00:04:14.809 SO libspdk_rpc.so.6.0 00:04:14.809 LIB libspdk_env_dpdk.a 00:04:14.809 SYMLINK libspdk_rpc.so 00:04:14.809 SO libspdk_env_dpdk.so.15.0 00:04:15.066 CC lib/trace/trace.o 00:04:15.066 CC lib/notify/notify.o 00:04:15.066 CC lib/trace/trace_flags.o 00:04:15.066 CC lib/trace/trace_rpc.o 00:04:15.066 CC lib/notify/notify_rpc.o 00:04:15.066 CC lib/keyring/keyring.o 00:04:15.066 CC lib/keyring/keyring_rpc.o 00:04:15.066 SYMLINK libspdk_env_dpdk.so 00:04:15.323 LIB libspdk_notify.a 00:04:15.323 SO libspdk_notify.so.6.0 00:04:15.323 LIB libspdk_keyring.a 00:04:15.323 SYMLINK libspdk_notify.so 00:04:15.323 SO libspdk_keyring.so.1.0 00:04:15.323 LIB libspdk_trace.a 00:04:15.581 SO libspdk_trace.so.10.0 00:04:15.581 SYMLINK libspdk_keyring.so 00:04:15.581 SYMLINK libspdk_trace.so 00:04:15.839 CC lib/sock/sock_rpc.o 00:04:15.839 CC lib/sock/sock.o 00:04:15.839 CC lib/thread/thread.o 00:04:15.839 CC lib/thread/iobuf.o 00:04:16.403 LIB libspdk_sock.a 00:04:16.403 SO libspdk_sock.so.10.0 00:04:16.667 SYMLINK libspdk_sock.so 00:04:16.924 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:16.924 CC lib/nvme/nvme_ctrlr.o 00:04:16.924 CC lib/nvme/nvme_fabric.o 00:04:16.924 CC lib/nvme/nvme_ns_cmd.o 00:04:16.924 CC lib/nvme/nvme_ns.o 00:04:16.924 CC lib/nvme/nvme_pcie_common.o 00:04:16.924 CC lib/nvme/nvme_pcie.o 00:04:16.924 CC lib/nvme/nvme_qpair.o 00:04:16.924 CC lib/nvme/nvme.o 00:04:17.858 CC lib/nvme/nvme_quirks.o 00:04:17.858 CC lib/nvme/nvme_transport.o 00:04:17.858 CC lib/nvme/nvme_discovery.o 00:04:17.858 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:17.858 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:17.858 LIB libspdk_thread.a 00:04:18.116 SO libspdk_thread.so.10.1 00:04:18.116 CC lib/nvme/nvme_tcp.o 00:04:18.116 CC lib/nvme/nvme_opal.o 00:04:18.116 SYMLINK libspdk_thread.so 00:04:18.374 CC lib/accel/accel.o 00:04:18.632 CC lib/nvme/nvme_io_msg.o 00:04:18.632 CC lib/nvme/nvme_poll_group.o 00:04:18.632 CC lib/nvme/nvme_zns.o 00:04:18.632 CC lib/nvme/nvme_stubs.o 00:04:18.890 CC lib/nvme/nvme_auth.o 00:04:18.890 CC 
lib/nvme/nvme_cuse.o 00:04:18.890 CC lib/nvme/nvme_rdma.o 00:04:19.149 CC lib/accel/accel_rpc.o 00:04:19.407 CC lib/blob/blobstore.o 00:04:19.407 CC lib/init/json_config.o 00:04:19.407 CC lib/virtio/virtio.o 00:04:19.407 CC lib/accel/accel_sw.o 00:04:19.665 CC lib/init/subsystem.o 00:04:19.923 CC lib/virtio/virtio_vhost_user.o 00:04:19.923 CC lib/init/subsystem_rpc.o 00:04:19.923 CC lib/init/rpc.o 00:04:19.923 CC lib/blob/request.o 00:04:19.923 CC lib/blob/zeroes.o 00:04:20.180 CC lib/blob/blob_bs_dev.o 00:04:20.180 CC lib/virtio/virtio_vfio_user.o 00:04:20.180 LIB libspdk_init.a 00:04:20.180 CC lib/virtio/virtio_pci.o 00:04:20.180 SO libspdk_init.so.5.0 00:04:20.438 SYMLINK libspdk_init.so 00:04:20.696 CC lib/event/app.o 00:04:20.696 CC lib/event/reactor.o 00:04:20.696 CC lib/event/log_rpc.o 00:04:20.696 CC lib/event/scheduler_static.o 00:04:20.696 CC lib/event/app_rpc.o 00:04:20.696 LIB libspdk_virtio.a 00:04:20.696 SO libspdk_virtio.so.7.0 00:04:20.696 LIB libspdk_accel.a 00:04:20.696 SYMLINK libspdk_virtio.so 00:04:20.696 SO libspdk_accel.so.16.0 00:04:20.954 SYMLINK libspdk_accel.so 00:04:20.954 LIB libspdk_nvme.a 00:04:21.212 CC lib/bdev/bdev_zone.o 00:04:21.212 CC lib/bdev/bdev.o 00:04:21.212 CC lib/bdev/bdev_rpc.o 00:04:21.212 CC lib/bdev/part.o 00:04:21.212 CC lib/bdev/scsi_nvme.o 00:04:21.212 SO libspdk_nvme.so.13.1 00:04:21.470 LIB libspdk_event.a 00:04:21.470 SO libspdk_event.so.14.0 00:04:21.470 SYMLINK libspdk_event.so 00:04:21.727 SYMLINK libspdk_nvme.so 00:04:24.317 LIB libspdk_blob.a 00:04:24.317 SO libspdk_blob.so.11.0 00:04:24.317 SYMLINK libspdk_blob.so 00:04:24.575 CC lib/blobfs/blobfs.o 00:04:24.575 CC lib/lvol/lvol.o 00:04:24.575 CC lib/blobfs/tree.o 00:04:25.139 LIB libspdk_bdev.a 00:04:25.139 SO libspdk_bdev.so.16.0 00:04:25.139 SYMLINK libspdk_bdev.so 00:04:25.396 CC lib/nbd/nbd_rpc.o 00:04:25.396 CC lib/nbd/nbd.o 00:04:25.396 CC lib/ublk/ublk.o 00:04:25.396 CC lib/ublk/ublk_rpc.o 00:04:25.396 CC lib/nvmf/ctrlr.o 00:04:25.396 CC lib/nvmf/ctrlr_discovery.o 00:04:25.396 CC lib/scsi/dev.o 00:04:25.396 CC lib/ftl/ftl_core.o 00:04:25.653 CC lib/scsi/lun.o 00:04:25.653 LIB libspdk_blobfs.a 00:04:25.653 SO libspdk_blobfs.so.10.0 00:04:25.909 CC lib/scsi/port.o 00:04:25.909 SYMLINK libspdk_blobfs.so 00:04:25.909 CC lib/scsi/scsi.o 00:04:25.909 LIB libspdk_lvol.a 00:04:25.909 SO libspdk_lvol.so.10.0 00:04:25.909 CC lib/scsi/scsi_bdev.o 00:04:25.909 LIB libspdk_nbd.a 00:04:25.909 SYMLINK libspdk_lvol.so 00:04:25.909 CC lib/nvmf/ctrlr_bdev.o 00:04:25.909 SO libspdk_nbd.so.7.0 00:04:25.909 CC lib/ftl/ftl_init.o 00:04:26.166 CC lib/nvmf/subsystem.o 00:04:26.166 CC lib/nvmf/nvmf.o 00:04:26.166 CC lib/nvmf/nvmf_rpc.o 00:04:26.166 CC lib/ftl/ftl_layout.o 00:04:26.166 SYMLINK libspdk_nbd.so 00:04:26.166 CC lib/ftl/ftl_debug.o 00:04:26.166 CC lib/ftl/ftl_io.o 00:04:26.476 LIB libspdk_ublk.a 00:04:26.476 SO libspdk_ublk.so.3.0 00:04:26.476 CC lib/ftl/ftl_sb.o 00:04:26.476 SYMLINK libspdk_ublk.so 00:04:26.476 CC lib/scsi/scsi_pr.o 00:04:26.476 CC lib/scsi/scsi_rpc.o 00:04:26.476 CC lib/nvmf/transport.o 00:04:26.476 CC lib/scsi/task.o 00:04:26.733 CC lib/ftl/ftl_l2p.o 00:04:26.733 CC lib/nvmf/tcp.o 00:04:26.733 CC lib/nvmf/stubs.o 00:04:26.989 CC lib/nvmf/mdns_server.o 00:04:26.989 CC lib/ftl/ftl_l2p_flat.o 00:04:26.989 LIB libspdk_scsi.a 00:04:26.989 SO libspdk_scsi.so.9.0 00:04:27.247 SYMLINK libspdk_scsi.so 00:04:27.247 CC lib/ftl/ftl_nv_cache.o 00:04:27.247 CC lib/nvmf/rdma.o 00:04:27.247 CC lib/nvmf/auth.o 00:04:27.504 CC lib/iscsi/conn.o 00:04:27.504 CC lib/iscsi/init_grp.o 
00:04:27.504 CC lib/ftl/ftl_band.o 00:04:27.504 CC lib/vhost/vhost.o 00:04:27.762 CC lib/ftl/ftl_band_ops.o 00:04:27.762 CC lib/iscsi/iscsi.o 00:04:28.019 CC lib/vhost/vhost_rpc.o 00:04:28.019 CC lib/ftl/ftl_writer.o 00:04:28.019 CC lib/ftl/ftl_rq.o 00:04:28.277 CC lib/iscsi/md5.o 00:04:28.277 CC lib/vhost/vhost_scsi.o 00:04:28.277 CC lib/vhost/vhost_blk.o 00:04:28.277 CC lib/ftl/ftl_reloc.o 00:04:28.277 CC lib/iscsi/param.o 00:04:28.535 CC lib/ftl/ftl_l2p_cache.o 00:04:28.535 CC lib/ftl/ftl_p2l.o 00:04:28.793 CC lib/vhost/rte_vhost_user.o 00:04:28.793 CC lib/ftl/mngt/ftl_mngt.o 00:04:28.793 CC lib/iscsi/portal_grp.o 00:04:29.051 CC lib/iscsi/tgt_node.o 00:04:29.051 CC lib/iscsi/iscsi_subsystem.o 00:04:29.051 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:29.051 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:29.309 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:29.309 CC lib/iscsi/iscsi_rpc.o 00:04:29.309 CC lib/iscsi/task.o 00:04:29.309 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:29.309 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:29.566 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:29.566 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:29.566 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:29.566 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:29.874 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:29.874 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:29.874 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:29.874 CC lib/ftl/utils/ftl_conf.o 00:04:29.874 CC lib/ftl/utils/ftl_md.o 00:04:29.874 LIB libspdk_iscsi.a 00:04:29.874 CC lib/ftl/utils/ftl_mempool.o 00:04:29.874 SO libspdk_iscsi.so.8.0 00:04:30.158 CC lib/ftl/utils/ftl_bitmap.o 00:04:30.158 CC lib/ftl/utils/ftl_property.o 00:04:30.158 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:30.158 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:30.158 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:30.158 LIB libspdk_vhost.a 00:04:30.158 SYMLINK libspdk_iscsi.so 00:04:30.158 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:30.158 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:30.158 LIB libspdk_nvmf.a 00:04:30.158 SO libspdk_vhost.so.8.0 00:04:30.424 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:30.425 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:30.425 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:30.425 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:30.425 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:30.425 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:30.425 SYMLINK libspdk_vhost.so 00:04:30.425 CC lib/ftl/base/ftl_base_dev.o 00:04:30.425 CC lib/ftl/base/ftl_base_bdev.o 00:04:30.425 SO libspdk_nvmf.so.19.0 00:04:30.425 CC lib/ftl/ftl_trace.o 00:04:30.691 LIB libspdk_ftl.a 00:04:30.691 SYMLINK libspdk_nvmf.so 00:04:30.949 SO libspdk_ftl.so.9.0 00:04:31.514 SYMLINK libspdk_ftl.so 00:04:31.771 CC module/env_dpdk/env_dpdk_rpc.o 00:04:32.029 CC module/sock/posix/posix.o 00:04:32.029 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:32.029 CC module/keyring/file/keyring.o 00:04:32.029 CC module/scheduler/gscheduler/gscheduler.o 00:04:32.029 CC module/keyring/linux/keyring.o 00:04:32.029 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:32.029 CC module/accel/ioat/accel_ioat.o 00:04:32.029 CC module/accel/error/accel_error.o 00:04:32.029 CC module/blob/bdev/blob_bdev.o 00:04:32.029 LIB libspdk_env_dpdk_rpc.a 00:04:32.029 SO libspdk_env_dpdk_rpc.so.6.0 00:04:32.029 CC module/keyring/file/keyring_rpc.o 00:04:32.029 CC module/keyring/linux/keyring_rpc.o 00:04:32.029 LIB libspdk_scheduler_dpdk_governor.a 00:04:32.029 SYMLINK libspdk_env_dpdk_rpc.so 00:04:32.029 CC module/accel/ioat/accel_ioat_rpc.o 00:04:32.335 LIB libspdk_scheduler_gscheduler.a 00:04:32.335 SO 
libspdk_scheduler_dpdk_governor.so.4.0 00:04:32.335 CC module/accel/error/accel_error_rpc.o 00:04:32.335 LIB libspdk_scheduler_dynamic.a 00:04:32.335 SO libspdk_scheduler_gscheduler.so.4.0 00:04:32.335 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:32.335 SO libspdk_scheduler_dynamic.so.4.0 00:04:32.335 LIB libspdk_keyring_linux.a 00:04:32.335 LIB libspdk_keyring_file.a 00:04:32.335 LIB libspdk_blob_bdev.a 00:04:32.335 LIB libspdk_accel_ioat.a 00:04:32.335 SO libspdk_keyring_linux.so.1.0 00:04:32.335 SO libspdk_keyring_file.so.1.0 00:04:32.335 SYMLINK libspdk_scheduler_gscheduler.so 00:04:32.335 SO libspdk_blob_bdev.so.11.0 00:04:32.335 SYMLINK libspdk_scheduler_dynamic.so 00:04:32.335 SO libspdk_accel_ioat.so.6.0 00:04:32.335 SYMLINK libspdk_keyring_linux.so 00:04:32.335 LIB libspdk_accel_error.a 00:04:32.335 SYMLINK libspdk_keyring_file.so 00:04:32.335 SYMLINK libspdk_blob_bdev.so 00:04:32.335 SO libspdk_accel_error.so.2.0 00:04:32.335 CC module/accel/dsa/accel_dsa.o 00:04:32.335 CC module/accel/dsa/accel_dsa_rpc.o 00:04:32.335 SYMLINK libspdk_accel_ioat.so 00:04:32.593 CC module/accel/iaa/accel_iaa.o 00:04:32.593 CC module/accel/iaa/accel_iaa_rpc.o 00:04:32.593 SYMLINK libspdk_accel_error.so 00:04:32.593 CC module/bdev/gpt/gpt.o 00:04:32.593 CC module/bdev/lvol/vbdev_lvol.o 00:04:32.593 CC module/blobfs/bdev/blobfs_bdev.o 00:04:32.593 CC module/bdev/delay/vbdev_delay.o 00:04:32.593 CC module/bdev/error/vbdev_error.o 00:04:32.593 LIB libspdk_accel_iaa.a 00:04:32.851 LIB libspdk_accel_dsa.a 00:04:32.851 SO libspdk_accel_iaa.so.3.0 00:04:32.851 CC module/bdev/malloc/bdev_malloc.o 00:04:32.851 SO libspdk_accel_dsa.so.5.0 00:04:32.851 SYMLINK libspdk_accel_iaa.so 00:04:32.851 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:32.851 CC module/bdev/null/bdev_null.o 00:04:32.851 SYMLINK libspdk_accel_dsa.so 00:04:32.851 CC module/bdev/null/bdev_null_rpc.o 00:04:32.851 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:32.851 CC module/bdev/gpt/vbdev_gpt.o 00:04:32.851 LIB libspdk_sock_posix.a 00:04:33.110 SO libspdk_sock_posix.so.6.0 00:04:33.110 CC module/bdev/error/vbdev_error_rpc.o 00:04:33.110 SYMLINK libspdk_sock_posix.so 00:04:33.110 LIB libspdk_blobfs_bdev.a 00:04:33.110 SO libspdk_blobfs_bdev.so.6.0 00:04:33.110 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:33.110 LIB libspdk_bdev_null.a 00:04:33.110 LIB libspdk_bdev_error.a 00:04:33.110 SYMLINK libspdk_blobfs_bdev.so 00:04:33.110 LIB libspdk_bdev_gpt.a 00:04:33.110 SO libspdk_bdev_null.so.6.0 00:04:33.110 CC module/bdev/nvme/bdev_nvme.o 00:04:33.368 LIB libspdk_bdev_malloc.a 00:04:33.368 SO libspdk_bdev_error.so.6.0 00:04:33.368 SO libspdk_bdev_gpt.so.6.0 00:04:33.368 CC module/bdev/passthru/vbdev_passthru.o 00:04:33.368 SO libspdk_bdev_malloc.so.6.0 00:04:33.368 CC module/bdev/raid/bdev_raid.o 00:04:33.368 SYMLINK libspdk_bdev_null.so 00:04:33.368 SYMLINK libspdk_bdev_error.so 00:04:33.368 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:33.368 SYMLINK libspdk_bdev_gpt.so 00:04:33.368 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:33.368 SYMLINK libspdk_bdev_malloc.so 00:04:33.368 CC module/bdev/nvme/nvme_rpc.o 00:04:33.368 LIB libspdk_bdev_delay.a 00:04:33.368 CC module/bdev/nvme/bdev_mdns_client.o 00:04:33.368 CC module/bdev/split/vbdev_split.o 00:04:33.368 SO libspdk_bdev_delay.so.6.0 00:04:33.368 SYMLINK libspdk_bdev_delay.so 00:04:33.629 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:33.629 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:33.629 CC module/bdev/xnvme/bdev_xnvme.o 00:04:33.629 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:33.629 
CC module/bdev/split/vbdev_split_rpc.o 00:04:33.629 CC module/bdev/aio/bdev_aio.o 00:04:33.896 LIB libspdk_bdev_lvol.a 00:04:33.896 SO libspdk_bdev_lvol.so.6.0 00:04:33.896 LIB libspdk_bdev_passthru.a 00:04:33.896 LIB libspdk_bdev_split.a 00:04:33.896 SO libspdk_bdev_passthru.so.6.0 00:04:33.896 SO libspdk_bdev_split.so.6.0 00:04:33.896 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:33.896 SYMLINK libspdk_bdev_lvol.so 00:04:33.896 CC module/bdev/aio/bdev_aio_rpc.o 00:04:33.896 LIB libspdk_bdev_xnvme.a 00:04:33.896 SYMLINK libspdk_bdev_passthru.so 00:04:33.896 SYMLINK libspdk_bdev_split.so 00:04:34.154 SO libspdk_bdev_xnvme.so.3.0 00:04:34.154 LIB libspdk_bdev_zone_block.a 00:04:34.154 CC module/bdev/ftl/bdev_ftl.o 00:04:34.154 SYMLINK libspdk_bdev_xnvme.so 00:04:34.154 CC module/bdev/raid/bdev_raid_rpc.o 00:04:34.154 CC module/bdev/nvme/vbdev_opal.o 00:04:34.154 LIB libspdk_bdev_aio.a 00:04:34.154 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:34.154 SO libspdk_bdev_zone_block.so.6.0 00:04:34.154 SO libspdk_bdev_aio.so.6.0 00:04:34.154 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:34.154 CC module/bdev/iscsi/bdev_iscsi.o 00:04:34.411 SYMLINK libspdk_bdev_zone_block.so 00:04:34.411 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:34.411 SYMLINK libspdk_bdev_aio.so 00:04:34.411 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:34.411 CC module/bdev/raid/bdev_raid_sb.o 00:04:34.411 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:34.411 CC module/bdev/raid/raid0.o 00:04:34.411 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:34.684 LIB libspdk_bdev_ftl.a 00:04:34.684 CC module/bdev/raid/raid1.o 00:04:34.684 SO libspdk_bdev_ftl.so.6.0 00:04:34.684 CC module/bdev/raid/concat.o 00:04:34.684 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:34.684 SYMLINK libspdk_bdev_ftl.so 00:04:34.943 LIB libspdk_bdev_iscsi.a 00:04:34.943 LIB libspdk_bdev_virtio.a 00:04:34.943 SO libspdk_bdev_iscsi.so.6.0 00:04:34.943 SO libspdk_bdev_virtio.so.6.0 00:04:34.943 LIB libspdk_bdev_raid.a 00:04:34.943 SYMLINK libspdk_bdev_iscsi.so 00:04:34.943 SYMLINK libspdk_bdev_virtio.so 00:04:34.943 SO libspdk_bdev_raid.so.6.0 00:04:35.200 SYMLINK libspdk_bdev_raid.so 00:04:36.133 LIB libspdk_bdev_nvme.a 00:04:36.390 SO libspdk_bdev_nvme.so.7.0 00:04:36.647 SYMLINK libspdk_bdev_nvme.so 00:04:36.904 CC module/event/subsystems/iobuf/iobuf.o 00:04:36.904 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:36.904 CC module/event/subsystems/keyring/keyring.o 00:04:36.904 CC module/event/subsystems/vmd/vmd.o 00:04:36.904 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:36.904 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:36.904 CC module/event/subsystems/scheduler/scheduler.o 00:04:36.904 CC module/event/subsystems/sock/sock.o 00:04:37.162 LIB libspdk_event_keyring.a 00:04:37.162 LIB libspdk_event_vhost_blk.a 00:04:37.162 LIB libspdk_event_vmd.a 00:04:37.162 LIB libspdk_event_sock.a 00:04:37.162 LIB libspdk_event_iobuf.a 00:04:37.162 LIB libspdk_event_scheduler.a 00:04:37.162 SO libspdk_event_keyring.so.1.0 00:04:37.162 SO libspdk_event_vhost_blk.so.3.0 00:04:37.162 SO libspdk_event_sock.so.5.0 00:04:37.162 SO libspdk_event_scheduler.so.4.0 00:04:37.162 SO libspdk_event_vmd.so.6.0 00:04:37.162 SO libspdk_event_iobuf.so.3.0 00:04:37.420 SYMLINK libspdk_event_sock.so 00:04:37.420 SYMLINK libspdk_event_keyring.so 00:04:37.420 SYMLINK libspdk_event_scheduler.so 00:04:37.420 SYMLINK libspdk_event_vhost_blk.so 00:04:37.420 SYMLINK libspdk_event_vmd.so 00:04:37.420 SYMLINK libspdk_event_iobuf.so 00:04:37.680 CC module/event/subsystems/accel/accel.o 
00:04:37.680 LIB libspdk_event_accel.a 00:04:37.938 SO libspdk_event_accel.so.6.0 00:04:37.938 SYMLINK libspdk_event_accel.so 00:04:38.195 CC module/event/subsystems/bdev/bdev.o 00:04:38.454 LIB libspdk_event_bdev.a 00:04:38.454 SO libspdk_event_bdev.so.6.0 00:04:38.454 SYMLINK libspdk_event_bdev.so 00:04:38.712 CC module/event/subsystems/scsi/scsi.o 00:04:38.712 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:38.712 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:38.712 CC module/event/subsystems/ublk/ublk.o 00:04:38.712 CC module/event/subsystems/nbd/nbd.o 00:04:38.970 LIB libspdk_event_nbd.a 00:04:38.970 LIB libspdk_event_scsi.a 00:04:38.970 LIB libspdk_event_ublk.a 00:04:38.970 SO libspdk_event_nbd.so.6.0 00:04:38.970 SO libspdk_event_ublk.so.3.0 00:04:38.970 SO libspdk_event_scsi.so.6.0 00:04:38.970 SYMLINK libspdk_event_ublk.so 00:04:38.970 SYMLINK libspdk_event_nbd.so 00:04:38.970 SYMLINK libspdk_event_scsi.so 00:04:38.970 LIB libspdk_event_nvmf.a 00:04:38.970 SO libspdk_event_nvmf.so.6.0 00:04:39.227 SYMLINK libspdk_event_nvmf.so 00:04:39.227 CC module/event/subsystems/iscsi/iscsi.o 00:04:39.227 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:39.485 LIB libspdk_event_vhost_scsi.a 00:04:39.485 LIB libspdk_event_iscsi.a 00:04:39.485 SO libspdk_event_vhost_scsi.so.3.0 00:04:39.485 SO libspdk_event_iscsi.so.6.0 00:04:39.485 SYMLINK libspdk_event_vhost_scsi.so 00:04:39.485 SYMLINK libspdk_event_iscsi.so 00:04:39.743 SO libspdk.so.6.0 00:04:39.743 SYMLINK libspdk.so 00:04:40.001 CXX app/trace/trace.o 00:04:40.001 CC app/trace_record/trace_record.o 00:04:40.001 CC app/spdk_lspci/spdk_lspci.o 00:04:40.001 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:40.001 CC app/iscsi_tgt/iscsi_tgt.o 00:04:40.001 CC app/nvmf_tgt/nvmf_main.o 00:04:40.001 CC app/spdk_tgt/spdk_tgt.o 00:04:40.001 CC examples/ioat/perf/perf.o 00:04:40.001 CC examples/util/zipf/zipf.o 00:04:40.001 CC test/thread/poller_perf/poller_perf.o 00:04:40.259 LINK spdk_lspci 00:04:40.259 LINK nvmf_tgt 00:04:40.259 LINK iscsi_tgt 00:04:40.259 LINK zipf 00:04:40.259 LINK interrupt_tgt 00:04:40.259 LINK poller_perf 00:04:40.259 LINK spdk_trace_record 00:04:40.259 LINK ioat_perf 00:04:40.517 LINK spdk_tgt 00:04:40.517 LINK spdk_trace 00:04:40.517 CC app/spdk_nvme_perf/perf.o 00:04:40.775 CC examples/ioat/verify/verify.o 00:04:40.775 TEST_HEADER include/spdk/accel.h 00:04:40.775 TEST_HEADER include/spdk/accel_module.h 00:04:40.775 TEST_HEADER include/spdk/assert.h 00:04:40.775 CC app/spdk_nvme_identify/identify.o 00:04:40.775 TEST_HEADER include/spdk/barrier.h 00:04:40.775 TEST_HEADER include/spdk/base64.h 00:04:40.775 TEST_HEADER include/spdk/bdev.h 00:04:40.775 TEST_HEADER include/spdk/bdev_module.h 00:04:40.775 TEST_HEADER include/spdk/bdev_zone.h 00:04:40.775 TEST_HEADER include/spdk/bit_array.h 00:04:40.775 TEST_HEADER include/spdk/bit_pool.h 00:04:40.775 TEST_HEADER include/spdk/blob_bdev.h 00:04:40.775 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:40.775 TEST_HEADER include/spdk/blobfs.h 00:04:40.775 TEST_HEADER include/spdk/blob.h 00:04:40.775 TEST_HEADER include/spdk/conf.h 00:04:40.775 TEST_HEADER include/spdk/config.h 00:04:40.775 TEST_HEADER include/spdk/cpuset.h 00:04:40.775 TEST_HEADER include/spdk/crc16.h 00:04:40.775 TEST_HEADER include/spdk/crc32.h 00:04:40.775 TEST_HEADER include/spdk/crc64.h 00:04:40.775 CC app/spdk_nvme_discover/discovery_aer.o 00:04:40.775 TEST_HEADER include/spdk/dif.h 00:04:40.775 TEST_HEADER include/spdk/dma.h 00:04:40.775 TEST_HEADER include/spdk/endian.h 00:04:40.775 TEST_HEADER 
include/spdk/env_dpdk.h 00:04:40.775 TEST_HEADER include/spdk/env.h 00:04:40.775 TEST_HEADER include/spdk/event.h 00:04:40.775 TEST_HEADER include/spdk/fd_group.h 00:04:40.775 TEST_HEADER include/spdk/fd.h 00:04:40.775 CC test/dma/test_dma/test_dma.o 00:04:40.775 TEST_HEADER include/spdk/file.h 00:04:40.775 TEST_HEADER include/spdk/ftl.h 00:04:40.775 TEST_HEADER include/spdk/gpt_spec.h 00:04:40.775 TEST_HEADER include/spdk/hexlify.h 00:04:40.775 TEST_HEADER include/spdk/histogram_data.h 00:04:40.775 CC test/app/bdev_svc/bdev_svc.o 00:04:40.775 TEST_HEADER include/spdk/idxd.h 00:04:40.775 TEST_HEADER include/spdk/idxd_spec.h 00:04:40.775 TEST_HEADER include/spdk/init.h 00:04:40.775 TEST_HEADER include/spdk/ioat.h 00:04:40.775 TEST_HEADER include/spdk/ioat_spec.h 00:04:40.775 TEST_HEADER include/spdk/iscsi_spec.h 00:04:40.775 CC examples/sock/hello_world/hello_sock.o 00:04:40.775 TEST_HEADER include/spdk/json.h 00:04:40.775 TEST_HEADER include/spdk/jsonrpc.h 00:04:40.775 TEST_HEADER include/spdk/keyring.h 00:04:40.775 TEST_HEADER include/spdk/keyring_module.h 00:04:40.775 TEST_HEADER include/spdk/likely.h 00:04:40.775 TEST_HEADER include/spdk/log.h 00:04:40.775 TEST_HEADER include/spdk/lvol.h 00:04:40.775 TEST_HEADER include/spdk/memory.h 00:04:40.775 TEST_HEADER include/spdk/mmio.h 00:04:40.775 TEST_HEADER include/spdk/nbd.h 00:04:40.775 TEST_HEADER include/spdk/net.h 00:04:40.775 TEST_HEADER include/spdk/notify.h 00:04:40.775 TEST_HEADER include/spdk/nvme.h 00:04:40.775 TEST_HEADER include/spdk/nvme_intel.h 00:04:40.775 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:40.775 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:40.775 CC examples/thread/thread/thread_ex.o 00:04:40.775 TEST_HEADER include/spdk/nvme_spec.h 00:04:40.775 TEST_HEADER include/spdk/nvme_zns.h 00:04:40.775 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:40.775 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:40.775 TEST_HEADER include/spdk/nvmf.h 00:04:40.775 TEST_HEADER include/spdk/nvmf_spec.h 00:04:40.775 TEST_HEADER include/spdk/nvmf_transport.h 00:04:40.775 TEST_HEADER include/spdk/opal.h 00:04:40.775 TEST_HEADER include/spdk/opal_spec.h 00:04:40.775 TEST_HEADER include/spdk/pci_ids.h 00:04:40.775 TEST_HEADER include/spdk/pipe.h 00:04:40.775 TEST_HEADER include/spdk/queue.h 00:04:40.775 TEST_HEADER include/spdk/reduce.h 00:04:40.775 TEST_HEADER include/spdk/rpc.h 00:04:40.775 TEST_HEADER include/spdk/scheduler.h 00:04:40.775 TEST_HEADER include/spdk/scsi.h 00:04:40.775 TEST_HEADER include/spdk/scsi_spec.h 00:04:40.775 TEST_HEADER include/spdk/sock.h 00:04:40.775 TEST_HEADER include/spdk/stdinc.h 00:04:40.775 TEST_HEADER include/spdk/string.h 00:04:40.775 TEST_HEADER include/spdk/thread.h 00:04:40.775 TEST_HEADER include/spdk/trace.h 00:04:40.775 TEST_HEADER include/spdk/trace_parser.h 00:04:40.775 TEST_HEADER include/spdk/tree.h 00:04:41.033 TEST_HEADER include/spdk/ublk.h 00:04:41.033 TEST_HEADER include/spdk/util.h 00:04:41.033 TEST_HEADER include/spdk/uuid.h 00:04:41.033 TEST_HEADER include/spdk/version.h 00:04:41.033 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:41.033 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:41.033 TEST_HEADER include/spdk/vhost.h 00:04:41.033 TEST_HEADER include/spdk/vmd.h 00:04:41.033 TEST_HEADER include/spdk/xor.h 00:04:41.033 TEST_HEADER include/spdk/zipf.h 00:04:41.033 CXX test/cpp_headers/accel.o 00:04:41.034 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:41.034 LINK verify 00:04:41.034 LINK spdk_nvme_discover 00:04:41.034 LINK bdev_svc 00:04:41.034 CXX test/cpp_headers/accel_module.o 
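The TEST_HEADER/CXX pairs in this stretch drive the test/cpp_headers check: every public spdk/*.h is compiled in a translation unit of its own, so a header that forgets one of its own includes fails the build immediately. A minimal sketch of one such unit, assuming the include root of this job's workspace; the accel_check names are hypothetical:

echo 'int main(void) { return 0; }' > accel_check.cpp
g++ -I /home/vagrant/spdk_repo/spdk/include -include spdk/accel.h \
    -c accel_check.cpp -o accel_check.o   # fails unless spdk/accel.h is self-contained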
00:04:41.034 LINK hello_sock 00:04:41.034 LINK thread 00:04:41.034 CXX test/cpp_headers/assert.o 00:04:41.291 LINK test_dma 00:04:41.291 CXX test/cpp_headers/barrier.o 00:04:41.549 CC test/app/histogram_perf/histogram_perf.o 00:04:41.549 LINK nvme_fuzz 00:04:41.549 CC test/app/jsoncat/jsoncat.o 00:04:41.549 CXX test/cpp_headers/base64.o 00:04:41.549 CC test/env/mem_callbacks/mem_callbacks.o 00:04:41.549 CC examples/vmd/lsvmd/lsvmd.o 00:04:41.549 CC test/event/event_perf/event_perf.o 00:04:41.549 CC test/event/reactor/reactor.o 00:04:41.806 LINK histogram_perf 00:04:41.806 LINK jsoncat 00:04:41.806 LINK lsvmd 00:04:41.806 LINK spdk_nvme_perf 00:04:41.806 CXX test/cpp_headers/bdev.o 00:04:41.806 LINK event_perf 00:04:41.806 LINK reactor 00:04:41.806 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:41.806 LINK spdk_nvme_identify 00:04:42.064 CXX test/cpp_headers/bdev_module.o 00:04:42.064 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:42.064 CC examples/vmd/led/led.o 00:04:42.064 CC test/event/reactor_perf/reactor_perf.o 00:04:42.064 CC test/app/stub/stub.o 00:04:42.064 CC test/event/app_repeat/app_repeat.o 00:04:42.064 CC test/event/scheduler/scheduler.o 00:04:42.064 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:42.064 CXX test/cpp_headers/bdev_zone.o 00:04:42.322 CC app/spdk_top/spdk_top.o 00:04:42.322 LINK mem_callbacks 00:04:42.322 LINK led 00:04:42.322 LINK reactor_perf 00:04:42.322 LINK stub 00:04:42.322 LINK app_repeat 00:04:42.322 CXX test/cpp_headers/bit_array.o 00:04:42.322 LINK scheduler 00:04:42.322 CXX test/cpp_headers/bit_pool.o 00:04:42.581 CC test/env/vtophys/vtophys.o 00:04:42.581 CXX test/cpp_headers/blob_bdev.o 00:04:42.581 CXX test/cpp_headers/blobfs_bdev.o 00:04:42.581 CXX test/cpp_headers/blobfs.o 00:04:42.581 CC examples/idxd/perf/perf.o 00:04:42.581 LINK vtophys 00:04:42.581 LINK vhost_fuzz 00:04:42.839 CC app/vhost/vhost.o 00:04:42.839 CXX test/cpp_headers/blob.o 00:04:42.839 CC test/rpc_client/rpc_client_test.o 00:04:42.839 CXX test/cpp_headers/conf.o 00:04:42.839 LINK vhost 00:04:42.839 CC examples/accel/perf/accel_perf.o 00:04:43.096 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:43.096 LINK idxd_perf 00:04:43.096 CC app/spdk_dd/spdk_dd.o 00:04:43.096 CC test/accel/dif/dif.o 00:04:43.096 LINK rpc_client_test 00:04:43.096 CXX test/cpp_headers/config.o 00:04:43.096 CXX test/cpp_headers/cpuset.o 00:04:43.096 LINK env_dpdk_post_init 00:04:43.353 CXX test/cpp_headers/crc16.o 00:04:43.353 LINK spdk_top 00:04:43.353 CC app/fio/nvme/fio_plugin.o 00:04:43.353 CC test/env/memory/memory_ut.o 00:04:43.611 LINK spdk_dd 00:04:43.611 CC test/blobfs/mkfs/mkfs.o 00:04:43.611 CXX test/cpp_headers/crc32.o 00:04:43.611 LINK accel_perf 00:04:43.611 CC test/lvol/esnap/esnap.o 00:04:43.611 LINK dif 00:04:43.611 CXX test/cpp_headers/crc64.o 00:04:43.869 LINK mkfs 00:04:43.869 CC app/fio/bdev/fio_plugin.o 00:04:43.869 CXX test/cpp_headers/dif.o 00:04:43.869 CC test/nvme/aer/aer.o 00:04:44.127 CC test/nvme/reset/reset.o 00:04:44.127 CC examples/blob/hello_world/hello_blob.o 00:04:44.127 CC test/env/pci/pci_ut.o 00:04:44.127 CXX test/cpp_headers/dma.o 00:04:44.127 LINK iscsi_fuzz 00:04:44.127 LINK spdk_nvme 00:04:44.385 CXX test/cpp_headers/endian.o 00:04:44.385 LINK hello_blob 00:04:44.385 LINK reset 00:04:44.385 LINK aer 00:04:44.385 CC examples/blob/cli/blobcli.o 00:04:44.385 LINK spdk_bdev 00:04:44.385 CXX test/cpp_headers/env_dpdk.o 00:04:44.643 CXX test/cpp_headers/env.o 00:04:44.643 CXX test/cpp_headers/event.o 00:04:44.643 LINK pci_ut 00:04:44.643 CXX 
test/cpp_headers/fd_group.o 00:04:44.643 CC test/nvme/sgl/sgl.o 00:04:44.643 CC test/nvme/e2edp/nvme_dp.o 00:04:44.643 CXX test/cpp_headers/fd.o 00:04:44.643 CC test/bdev/bdevio/bdevio.o 00:04:44.900 LINK memory_ut 00:04:44.900 CC test/nvme/overhead/overhead.o 00:04:44.900 CXX test/cpp_headers/file.o 00:04:44.900 LINK sgl 00:04:44.900 LINK nvme_dp 00:04:44.900 CC examples/nvme/hello_world/hello_world.o 00:04:44.900 CC examples/nvme/reconnect/reconnect.o 00:04:44.900 LINK blobcli 00:04:45.158 CXX test/cpp_headers/ftl.o 00:04:45.158 CC test/nvme/err_injection/err_injection.o 00:04:45.158 LINK overhead 00:04:45.158 CC test/nvme/startup/startup.o 00:04:45.158 LINK hello_world 00:04:45.158 LINK bdevio 00:04:45.158 CC test/nvme/reserve/reserve.o 00:04:45.416 CXX test/cpp_headers/gpt_spec.o 00:04:45.416 CC test/nvme/simple_copy/simple_copy.o 00:04:45.416 LINK reconnect 00:04:45.416 LINK startup 00:04:45.416 LINK err_injection 00:04:45.416 CXX test/cpp_headers/hexlify.o 00:04:45.416 CC test/nvme/connect_stress/connect_stress.o 00:04:45.416 LINK reserve 00:04:45.675 CC test/nvme/boot_partition/boot_partition.o 00:04:45.675 CC test/nvme/compliance/nvme_compliance.o 00:04:45.675 LINK simple_copy 00:04:45.675 CXX test/cpp_headers/histogram_data.o 00:04:45.675 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:45.675 LINK connect_stress 00:04:45.675 CC test/nvme/fused_ordering/fused_ordering.o 00:04:45.675 LINK boot_partition 00:04:45.960 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:45.960 CC examples/bdev/hello_world/hello_bdev.o 00:04:45.960 CXX test/cpp_headers/idxd.o 00:04:45.960 LINK fused_ordering 00:04:45.960 CC examples/bdev/bdevperf/bdevperf.o 00:04:45.960 CC test/nvme/fdp/fdp.o 00:04:45.960 CXX test/cpp_headers/idxd_spec.o 00:04:45.960 LINK nvme_compliance 00:04:46.228 LINK doorbell_aers 00:04:46.228 CC test/nvme/cuse/cuse.o 00:04:46.228 LINK hello_bdev 00:04:46.228 CXX test/cpp_headers/init.o 00:04:46.228 CXX test/cpp_headers/ioat.o 00:04:46.486 LINK nvme_manage 00:04:46.486 CC examples/nvme/arbitration/arbitration.o 00:04:46.486 CC examples/nvme/hotplug/hotplug.o 00:04:46.486 CXX test/cpp_headers/ioat_spec.o 00:04:46.486 LINK fdp 00:04:46.486 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:46.486 CC examples/nvme/abort/abort.o 00:04:46.744 CXX test/cpp_headers/iscsi_spec.o 00:04:46.744 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:46.744 LINK hotplug 00:04:46.744 CXX test/cpp_headers/json.o 00:04:46.744 LINK cmb_copy 00:04:46.744 CXX test/cpp_headers/jsonrpc.o 00:04:46.744 LINK pmr_persistence 00:04:47.002 CXX test/cpp_headers/keyring.o 00:04:47.002 LINK arbitration 00:04:47.002 CXX test/cpp_headers/keyring_module.o 00:04:47.002 CXX test/cpp_headers/likely.o 00:04:47.002 LINK abort 00:04:47.002 LINK bdevperf 00:04:47.002 CXX test/cpp_headers/log.o 00:04:47.002 CXX test/cpp_headers/lvol.o 00:04:47.002 CXX test/cpp_headers/memory.o 00:04:47.002 CXX test/cpp_headers/mmio.o 00:04:47.259 CXX test/cpp_headers/nbd.o 00:04:47.259 CXX test/cpp_headers/net.o 00:04:47.259 CXX test/cpp_headers/notify.o 00:04:47.259 CXX test/cpp_headers/nvme.o 00:04:47.259 CXX test/cpp_headers/nvme_intel.o 00:04:47.259 CXX test/cpp_headers/nvme_ocssd.o 00:04:47.259 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:47.259 CXX test/cpp_headers/nvme_spec.o 00:04:47.259 CXX test/cpp_headers/nvme_zns.o 00:04:47.259 CXX test/cpp_headers/nvmf_cmd.o 00:04:47.517 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:47.517 CXX test/cpp_headers/nvmf.o 00:04:47.517 CXX test/cpp_headers/nvmf_spec.o 00:04:47.517 CXX 
test/cpp_headers/nvmf_transport.o 00:04:47.517 CXX test/cpp_headers/opal.o 00:04:47.517 CXX test/cpp_headers/opal_spec.o 00:04:47.517 CXX test/cpp_headers/pci_ids.o 00:04:47.517 CC examples/nvmf/nvmf/nvmf.o 00:04:47.775 CXX test/cpp_headers/pipe.o 00:04:47.775 CXX test/cpp_headers/queue.o 00:04:47.775 LINK cuse 00:04:47.775 CXX test/cpp_headers/reduce.o 00:04:47.775 CXX test/cpp_headers/rpc.o 00:04:47.775 CXX test/cpp_headers/scheduler.o 00:04:47.775 CXX test/cpp_headers/scsi.o 00:04:47.775 CXX test/cpp_headers/scsi_spec.o 00:04:47.775 CXX test/cpp_headers/sock.o 00:04:47.775 CXX test/cpp_headers/stdinc.o 00:04:48.033 CXX test/cpp_headers/string.o 00:04:48.033 CXX test/cpp_headers/thread.o 00:04:48.033 CXX test/cpp_headers/trace.o 00:04:48.033 LINK nvmf 00:04:48.033 CXX test/cpp_headers/trace_parser.o 00:04:48.033 CXX test/cpp_headers/tree.o 00:04:48.033 CXX test/cpp_headers/ublk.o 00:04:48.033 CXX test/cpp_headers/util.o 00:04:48.033 CXX test/cpp_headers/uuid.o 00:04:48.033 CXX test/cpp_headers/version.o 00:04:48.033 CXX test/cpp_headers/vfio_user_pci.o 00:04:48.033 CXX test/cpp_headers/vfio_user_spec.o 00:04:48.290 CXX test/cpp_headers/vhost.o 00:04:48.290 CXX test/cpp_headers/vmd.o 00:04:48.290 CXX test/cpp_headers/xor.o 00:04:48.290 CXX test/cpp_headers/zipf.o 00:04:50.859 LINK esnap 00:04:51.425 00:04:51.425 real 1m27.899s 00:04:51.425 user 8m37.660s 00:04:51.425 sys 1m56.935s 00:04:51.425 11:29:50 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:04:51.426 11:29:50 make -- common/autotest_common.sh@10 -- $ set +x 00:04:51.426 ************************************ 00:04:51.426 END TEST make 00:04:51.426 ************************************ 00:04:51.426 11:29:50 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:51.426 11:29:50 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:51.426 11:29:50 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:51.426 11:29:50 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:51.426 11:29:50 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:51.426 11:29:50 -- pm/common@44 -- $ pid=5341 00:04:51.426 11:29:50 -- pm/common@50 -- $ kill -TERM 5341 00:04:51.426 11:29:50 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:51.426 11:29:50 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:51.426 11:29:50 -- pm/common@44 -- $ pid=5343 00:04:51.426 11:29:50 -- pm/common@50 -- $ kill -TERM 5343 00:04:51.426 11:29:50 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:51.426 11:29:50 -- nvmf/common.sh@7 -- # uname -s 00:04:51.426 11:29:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:51.426 11:29:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:51.426 11:29:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:51.426 11:29:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:51.426 11:29:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:51.426 11:29:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:51.426 11:29:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:51.426 11:29:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:51.426 11:29:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:51.426 11:29:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:51.426 11:29:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3a80fdc5-55c1-4700-bb2d-5636737b542b 00:04:51.426 11:29:50 
-- nvmf/common.sh@18 -- # NVME_HOSTID=3a80fdc5-55c1-4700-bb2d-5636737b542b 00:04:51.426 11:29:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:51.426 11:29:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:51.426 11:29:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:51.426 11:29:50 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:51.426 11:29:50 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:51.426 11:29:50 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:51.426 11:29:50 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:51.426 11:29:50 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:51.426 11:29:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.426 11:29:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.426 11:29:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.426 11:29:50 -- paths/export.sh@5 -- # export PATH 00:04:51.426 11:29:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.426 11:29:50 -- nvmf/common.sh@47 -- # : 0 00:04:51.426 11:29:50 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:51.426 11:29:50 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:51.426 11:29:50 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:51.426 11:29:50 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:51.426 11:29:50 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:51.426 11:29:50 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:51.426 11:29:50 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:51.426 11:29:50 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:51.426 11:29:50 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:51.426 11:29:50 -- spdk/autotest.sh@32 -- # uname -s 00:04:51.426 11:29:50 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:51.426 11:29:50 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:51.426 11:29:50 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:51.426 11:29:50 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:51.426 11:29:50 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:51.426 11:29:50 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:51.426 11:29:50 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:51.426 11:29:50 -- 
spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:51.426 11:29:50 -- spdk/autotest.sh@48 -- # udevadm_pid=54009 00:04:51.426 11:29:50 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:51.426 11:29:50 -- pm/common@17 -- # local monitor 00:04:51.426 11:29:50 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:51.426 11:29:50 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:51.426 11:29:50 -- pm/common@21 -- # date +%s 00:04:51.426 11:29:50 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:51.426 11:29:50 -- pm/common@21 -- # date +%s 00:04:51.426 11:29:50 -- pm/common@25 -- # sleep 1 00:04:51.426 11:29:50 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721906990 00:04:51.426 11:29:50 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721906990 00:04:51.426 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721906990_collect-cpu-load.pm.log 00:04:51.426 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721906990_collect-vmstat.pm.log 00:04:52.361 11:29:51 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:52.361 11:29:51 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:52.361 11:29:51 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:52.361 11:29:51 -- common/autotest_common.sh@10 -- # set +x 00:04:52.361 11:29:51 -- spdk/autotest.sh@59 -- # create_test_list 00:04:52.361 11:29:51 -- common/autotest_common.sh@748 -- # xtrace_disable 00:04:52.361 11:29:51 -- common/autotest_common.sh@10 -- # set +x 00:04:52.619 11:29:51 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:52.619 11:29:51 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:52.619 11:29:51 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:52.619 11:29:51 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:52.619 11:29:51 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:52.619 11:29:51 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:52.619 11:29:51 -- common/autotest_common.sh@1455 -- # uname 00:04:52.619 11:29:51 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:52.619 11:29:51 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:52.619 11:29:51 -- common/autotest_common.sh@1475 -- # uname 00:04:52.619 11:29:51 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:52.619 11:29:51 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:52.619 11:29:51 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:52.619 11:29:51 -- spdk/autotest.sh@72 -- # hash lcov 00:04:52.619 11:29:51 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:52.619 11:29:51 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:52.619 --rc lcov_branch_coverage=1 00:04:52.619 --rc lcov_function_coverage=1 00:04:52.619 --rc genhtml_branch_coverage=1 00:04:52.619 --rc genhtml_function_coverage=1 00:04:52.619 --rc genhtml_legend=1 00:04:52.619 --rc geninfo_all_blocks=1 00:04:52.619 ' 00:04:52.619 11:29:51 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:04:52.619 --rc lcov_branch_coverage=1 00:04:52.619 --rc lcov_function_coverage=1 00:04:52.619 --rc genhtml_branch_coverage=1 00:04:52.619 --rc 
genhtml_function_coverage=1 00:04:52.619 --rc genhtml_legend=1 00:04:52.619 --rc geninfo_all_blocks=1 00:04:52.619 ' 00:04:52.619 11:29:51 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:04:52.619 --rc lcov_branch_coverage=1 00:04:52.619 --rc lcov_function_coverage=1 00:04:52.619 --rc genhtml_branch_coverage=1 00:04:52.619 --rc genhtml_function_coverage=1 00:04:52.619 --rc genhtml_legend=1 00:04:52.619 --rc geninfo_all_blocks=1 00:04:52.619 --no-external' 00:04:52.619 11:29:51 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:52.619 --rc lcov_branch_coverage=1 00:04:52.619 --rc lcov_function_coverage=1 00:04:52.619 --rc genhtml_branch_coverage=1 00:04:52.619 --rc genhtml_function_coverage=1 00:04:52.619 --rc genhtml_legend=1 00:04:52.619 --rc geninfo_all_blocks=1 00:04:52.619 --no-external' 00:04:52.619 11:29:51 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:52.619 lcov: LCOV version 1.14 00:04:52.619 11:29:51 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:10.693 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:10.693 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:22.955 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:05:22.955 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:05:22.955 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:05:22.955 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:05:22.955 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:05:22.955 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:05:22.955 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:05:22.955 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:05:22.955 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:05:22.955 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:05:22.955 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:05:22.955 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:05:22.955 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:05:22.955 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:05:22.955 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:05:22.955 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:05:22.955 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:05:22.955 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:05:22.956 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:05:22.956 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:05:22.956 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:05:22.956 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:05:22.956 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:05:22.956 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:05:22.956 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:05:22.956 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:05:22.956 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:05:22.956 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:05:22.956 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:05:22.956 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:05:22.956 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:05:22.956 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:05:22.956 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:05:22.956 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:05:22.956 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:05:22.956 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:05:22.956 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:05:22.956 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:05:22.956 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:05:22.956 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:05:22.956 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:05:22.956 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:05:22.956 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:05:22.956 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:05:22.956 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:05:22.956 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:05:22.956 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:05:22.956 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:05:22.956 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:05:22.956 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:05:22.956 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 
00:05:22.956 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:05:22.956 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:05:22.956 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:05:22.956 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:05:22.956 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:05:22.956 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:05:22.956 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:05:22.956 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:05:22.956 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:05:22.956 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:05:22.956 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:05:22.956 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:05:22.956 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:05:22.956 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:05:22.956 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:05:22.956 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:05:22.956 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:05:22.956 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:05:22.956 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:05:22.956 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:05:22.956 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:05:22.956 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:05:22.956 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:05:22.956 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:05:22.956 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:05:22.956 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:05:22.956 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:05:22.956 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:05:22.956 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:05:22.956 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:05:22.956 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:05:22.956 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:05:22.956 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 
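The warning pairs in this stretch are geninfo noting that a .gcno file contains no instrumented functions, so it contributes nothing to the Baseline capture; for this run they are noise rather than failures. They can be reproduced in isolation by narrowing the lcov invocation from 00:04:52 above to the header-test directory alone; a sketch, where only -d is narrowed and the output path /tmp/headers_base.info is chosen here for illustration:

lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 \
     --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 \
     --rc genhtml_legend=1 --rc geninfo_all_blocks=1 \
     --no-external -q -c -i -t Baseline \
     -d /home/vagrant/spdk_repo/spdk/test/cpp_headers -o /tmp/headers_base.info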
00:05:22.956 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:05:22.956 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:05:22.956 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:05:22.956 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:05:22.956 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:05:22.956 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:05:22.956 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:05:22.956 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:05:22.956 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:05:22.956 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:05:22.956 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:05:22.956 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:05:22.956 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:05:22.956 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:05:22.956 /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno:no functions found 00:05:22.956 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno 00:05:22.956 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:05:22.956 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:05:22.956 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:05:22.956 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:05:22.956 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:05:22.956 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:05:22.956 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:05:22.956 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:05:22.956 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:05:22.956 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:05:22.956 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:05:22.956 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:05:22.956 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:05:22.956 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:05:22.956 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:05:22.956 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:05:22.956 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:05:22.956 
geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:05:22.956 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:05:22.956 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:05:22.956 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:05:22.956 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:05:22.956 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:05:22.956 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:05:22.957 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:05:22.957 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:05:22.957 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:05:22.957 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:05:22.957 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:05:22.957 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:05:22.957 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:05:22.957 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:05:22.957 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:05:22.957 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:05:22.957 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:05:22.957 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:05:22.957 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:05:22.957 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:05:22.957 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:05:22.957 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:05:22.957 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:05:22.957 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:05:22.957 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:05:22.957 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:05:22.957 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:05:22.957 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:05:22.957 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:05:22.957 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:05:22.957 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:05:22.957 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:05:22.957 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:05:22.957 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:05:22.957 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:05:22.957 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:05:22.957 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:05:22.957 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:05:22.957 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:05:22.957 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:05:22.957 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:05:22.957 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:05:22.957 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:05:22.957 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:05:22.957 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:05:22.957 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:05:22.957 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:05:22.957 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:05:22.957 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:05:22.957 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:05:22.957 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:05:22.957 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:05:22.957 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:05:22.957 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:05:22.957 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:05:22.957 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:05:22.957 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:05:22.957 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:05:22.957 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:05:22.957 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:05:25.506 11:30:24 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:05:25.506 11:30:24 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:25.506 11:30:24 -- common/autotest_common.sh@10 -- # set +x 00:05:25.506 11:30:24 -- spdk/autotest.sh@91 -- # rm -f 00:05:25.506 11:30:24 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:26.072 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:26.329 0000:00:11.0 (1b36 0010): Already using the nvme driver 
00:05:26.588 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:26.588 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:05:26.588 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:05:26.588 11:30:25 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:05:26.588 11:30:25 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:26.588 11:30:25 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:26.588 11:30:25 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:26.588 11:30:25 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:26.588 11:30:25 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:26.588 11:30:25 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:26.588 11:30:25 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:26.588 11:30:25 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:26.588 11:30:25 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:26.588 11:30:25 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:05:26.588 11:30:25 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:05:26.588 11:30:25 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:26.588 11:30:25 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:26.588 11:30:25 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:26.588 11:30:25 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:05:26.588 11:30:25 -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:05:26.588 11:30:25 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:26.588 11:30:25 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:26.588 11:30:25 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:26.588 11:30:25 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:05:26.588 11:30:25 -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:05:26.588 11:30:25 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:05:26.588 11:30:25 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:26.588 11:30:25 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:26.588 11:30:25 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:05:26.588 11:30:25 -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:05:26.588 11:30:25 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:05:26.588 11:30:25 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:26.588 11:30:25 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:26.588 11:30:25 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:05:26.588 11:30:25 -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:05:26.588 11:30:25 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:05:26.588 11:30:25 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:26.588 11:30:25 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:26.588 11:30:25 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:05:26.588 11:30:25 -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:05:26.589 11:30:25 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:26.589 11:30:25 -- common/autotest_common.sh@1665 -- # [[ 
none != none ]] 00:05:26.589 11:30:25 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:05:26.589 11:30:25 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:26.589 11:30:25 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:26.589 11:30:25 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:05:26.589 11:30:25 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:05:26.589 11:30:25 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:26.589 No valid GPT data, bailing 00:05:26.589 11:30:25 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:26.589 11:30:25 -- scripts/common.sh@391 -- # pt= 00:05:26.589 11:30:25 -- scripts/common.sh@392 -- # return 1 00:05:26.589 11:30:25 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:26.589 1+0 records in 00:05:26.589 1+0 records out 00:05:26.589 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0142924 s, 73.4 MB/s 00:05:26.589 11:30:25 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:26.589 11:30:25 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:26.589 11:30:25 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:05:26.589 11:30:25 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:05:26.589 11:30:25 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:26.589 No valid GPT data, bailing 00:05:26.589 11:30:25 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:26.589 11:30:25 -- scripts/common.sh@391 -- # pt= 00:05:26.589 11:30:25 -- scripts/common.sh@392 -- # return 1 00:05:26.589 11:30:25 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:26.847 1+0 records in 00:05:26.847 1+0 records out 00:05:26.847 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00448734 s, 234 MB/s 00:05:26.847 11:30:25 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:26.847 11:30:25 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:26.847 11:30:25 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n1 00:05:26.847 11:30:25 -- scripts/common.sh@378 -- # local block=/dev/nvme2n1 pt 00:05:26.847 11:30:25 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:05:26.847 No valid GPT data, bailing 00:05:26.847 11:30:25 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:26.847 11:30:25 -- scripts/common.sh@391 -- # pt= 00:05:26.847 11:30:25 -- scripts/common.sh@392 -- # return 1 00:05:26.847 11:30:25 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:05:26.847 1+0 records in 00:05:26.847 1+0 records out 00:05:26.847 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00336039 s, 312 MB/s 00:05:26.847 11:30:25 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:26.847 11:30:25 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:26.847 11:30:25 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n2 00:05:26.847 11:30:25 -- scripts/common.sh@378 -- # local block=/dev/nvme2n2 pt 00:05:26.847 11:30:25 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:05:26.847 No valid GPT data, bailing 00:05:26.847 11:30:25 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:05:26.847 11:30:25 -- scripts/common.sh@391 -- # pt= 00:05:26.847 11:30:25 -- scripts/common.sh@392 -- # return 1 00:05:26.847 11:30:25 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:05:26.847 1+0 records in 00:05:26.847 1+0 
records out 00:05:26.847 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00473796 s, 221 MB/s 00:05:26.847 11:30:25 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:26.847 11:30:25 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:26.847 11:30:25 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n3 00:05:26.847 11:30:25 -- scripts/common.sh@378 -- # local block=/dev/nvme2n3 pt 00:05:26.847 11:30:25 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:05:26.847 No valid GPT data, bailing 00:05:26.847 11:30:25 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:05:26.847 11:30:25 -- scripts/common.sh@391 -- # pt= 00:05:26.847 11:30:25 -- scripts/common.sh@392 -- # return 1 00:05:26.847 11:30:25 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:05:27.105 1+0 records in 00:05:27.105 1+0 records out 00:05:27.105 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0047333 s, 222 MB/s 00:05:27.105 11:30:25 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:27.105 11:30:25 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:27.105 11:30:25 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme3n1 00:05:27.105 11:30:25 -- scripts/common.sh@378 -- # local block=/dev/nvme3n1 pt 00:05:27.105 11:30:25 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:05:27.105 No valid GPT data, bailing 00:05:27.105 11:30:25 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:27.105 11:30:25 -- scripts/common.sh@391 -- # pt= 00:05:27.105 11:30:25 -- scripts/common.sh@392 -- # return 1 00:05:27.105 11:30:25 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:05:27.105 1+0 records in 00:05:27.105 1+0 records out 00:05:27.105 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00534689 s, 196 MB/s 00:05:27.105 11:30:25 -- spdk/autotest.sh@118 -- # sync 00:05:27.105 11:30:26 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:27.105 11:30:26 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:27.105 11:30:26 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:29.005 11:30:27 -- spdk/autotest.sh@124 -- # uname -s 00:05:29.005 11:30:27 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:05:29.005 11:30:27 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:29.005 11:30:27 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:29.005 11:30:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:29.005 11:30:27 -- common/autotest_common.sh@10 -- # set +x 00:05:29.005 ************************************ 00:05:29.005 START TEST setup.sh 00:05:29.005 ************************************ 00:05:29.005 11:30:27 setup.sh -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:29.005 * Looking for test storage... 
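The device-scrub loop traced above (before the setup.sh banner) probes each /dev/nvme*n!(*p*) node with scripts/spdk-gpt.py and blkid; when neither finds a partition table ("No valid GPT data, bailing", empty PTTYPE), block_in_use returns 1 and the first 1 MiB of the device is zeroed so the tests start from a clean label area. A condensed sketch of that pattern; treat the loop as an illustration, not the verbatim autotest code:

shopt -s extglob                               # needed for the !(*p*) glob below
for dev in /dev/nvme*n!(*p*); do
  pt=$(blkid -s PTTYPE -o value "$dev" || true)
  if [[ -z "$pt" ]]; then
    dd if=/dev/zero of="$dev" bs=1M count=1    # scrub the label area, as in the trace
  fi
done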
00:05:29.005 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:29.005 11:30:27 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:05:29.005 11:30:27 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:29.005 11:30:27 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:29.005 11:30:27 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:29.005 11:30:27 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:29.005 11:30:27 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:29.005 ************************************ 00:05:29.005 START TEST acl 00:05:29.005 ************************************ 00:05:29.005 11:30:27 setup.sh.acl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:29.005 * Looking for test storage... 00:05:29.005 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:29.005 11:30:28 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:05:29.005 11:30:28 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:29.005 11:30:28 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:29.005 11:30:28 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:29.005 11:30:28 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:29.005 11:30:28 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:29.005 11:30:28 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:29.005 11:30:28 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:29.005 11:30:28 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:29.005 11:30:28 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:29.005 11:30:28 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:05:29.005 11:30:28 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:05:29.005 11:30:28 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:29.005 11:30:28 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:29.005 11:30:28 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:29.005 11:30:28 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:05:29.005 11:30:28 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:05:29.005 11:30:28 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:29.005 11:30:28 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:29.005 11:30:28 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:29.005 11:30:28 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:05:29.005 11:30:28 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:05:29.005 11:30:28 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:05:29.005 11:30:28 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:29.005 11:30:28 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:29.005 11:30:28 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:05:29.005 11:30:28 
setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:05:29.005 11:30:28 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:05:29.005 11:30:28 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:29.005 11:30:28 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:29.005 11:30:28 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:05:29.005 11:30:28 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:05:29.005 11:30:28 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:05:29.005 11:30:28 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:29.005 11:30:28 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:29.005 11:30:28 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:05:29.005 11:30:28 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:05:29.005 11:30:28 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:29.005 11:30:28 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:29.005 11:30:28 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:05:29.005 11:30:28 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:05:29.005 11:30:28 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:05:29.005 11:30:28 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:05:29.005 11:30:28 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:05:29.005 11:30:28 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:29.005 11:30:28 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:30.379 11:30:29 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:05:30.379 11:30:29 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:05:30.379 11:30:29 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:30.379 11:30:29 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:05:30.379 11:30:29 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:05:30.379 11:30:29 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:30.637 11:30:29 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:05:30.637 11:30:29 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:30.637 11:30:29 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:31.204 Hugepages 00:05:31.204 node hugesize free / total 00:05:31.204 11:30:30 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:31.204 11:30:30 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:31.204 11:30:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:31.204 00:05:31.204 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:31.204 11:30:30 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:31.204 11:30:30 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:31.204 11:30:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:31.204 11:30:30 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:05:31.204 11:30:30 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:05:31.204 11:30:30 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:31.204 11:30:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ 
driver _ 00:05:31.462 11:30:30 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:05:31.462 11:30:30 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:31.462 11:30:30 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:05:31.462 11:30:30 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:31.462 11:30:30 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:31.462 11:30:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:31.462 11:30:30 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:05:31.462 11:30:30 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:31.462 11:30:30 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:31.462 11:30:30 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:31.462 11:30:30 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:31.462 11:30:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:31.462 11:30:30 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:12.0 == *:*:*.* ]] 00:05:31.462 11:30:30 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:31.462 11:30:30 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:05:31.462 11:30:30 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:31.462 11:30:30 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:31.462 11:30:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:31.720 11:30:30 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:13.0 == *:*:*.* ]] 00:05:31.720 11:30:30 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:31.720 11:30:30 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\3\.\0* ]] 00:05:31.720 11:30:30 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:31.720 11:30:30 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:31.720 11:30:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:31.720 11:30:30 setup.sh.acl -- setup/acl.sh@24 -- # (( 4 > 0 )) 00:05:31.720 11:30:30 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:05:31.720 11:30:30 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:31.720 11:30:30 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:31.720 11:30:30 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:31.720 ************************************ 00:05:31.720 START TEST denied 00:05:31.720 ************************************ 00:05:31.720 11:30:30 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:05:31.720 11:30:30 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:05:31.720 11:30:30 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:05:31.720 11:30:30 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:05:31.720 11:30:30 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:05:31.721 11:30:30 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:33.103 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:05:33.103 11:30:31 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:05:33.103 11:30:31 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:05:33.103 11:30:31 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:05:33.103 11:30:31 setup.sh.acl.denied -- 
setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:05:33.103 11:30:31 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:05:33.103 11:30:31 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:33.103 11:30:31 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:33.103 11:30:31 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:05:33.103 11:30:31 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:33.103 11:30:31 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:39.677 00:05:39.677 real 0m7.304s 00:05:39.677 user 0m0.903s 00:05:39.677 sys 0m1.459s 00:05:39.677 11:30:37 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:39.677 11:30:37 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:05:39.677 ************************************ 00:05:39.677 END TEST denied 00:05:39.677 ************************************ 00:05:39.677 11:30:37 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:39.677 11:30:37 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:39.677 11:30:37 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:39.677 11:30:37 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:39.677 ************************************ 00:05:39.677 START TEST allowed 00:05:39.677 ************************************ 00:05:39.677 11:30:37 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:05:39.677 11:30:37 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:05:39.677 11:30:37 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:05:39.677 11:30:37 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:05:39.677 11:30:37 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:05:39.677 11:30:37 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:40.243 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:40.243 11:30:39 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:40.243 11:30:39 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:05:40.243 11:30:39 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:05:40.243 11:30:39 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:05:40.243 11:30:39 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:05:40.243 11:30:39 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:40.243 11:30:39 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:40.243 11:30:39 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:05:40.243 11:30:39 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:12.0 ]] 00:05:40.243 11:30:39 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:12.0/driver 00:05:40.243 11:30:39 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:40.243 11:30:39 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:40.243 11:30:39 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:05:40.243 11:30:39 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e 
/sys/bus/pci/devices/0000:00:13.0 ]] 00:05:40.243 11:30:39 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:13.0/driver 00:05:40.244 11:30:39 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:40.244 11:30:39 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:40.244 11:30:39 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:05:40.244 11:30:39 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:40.244 11:30:39 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:41.621 00:05:41.621 real 0m2.351s 00:05:41.621 user 0m1.054s 00:05:41.621 sys 0m1.273s 00:05:41.621 11:30:40 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:41.621 ************************************ 00:05:41.621 END TEST allowed 00:05:41.621 ************************************ 00:05:41.621 11:30:40 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:05:41.621 ************************************ 00:05:41.621 END TEST acl 00:05:41.621 ************************************ 00:05:41.621 00:05:41.621 real 0m12.368s 00:05:41.621 user 0m3.223s 00:05:41.621 sys 0m4.181s 00:05:41.621 11:30:40 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:41.621 11:30:40 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:41.621 11:30:40 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:41.621 11:30:40 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:41.621 11:30:40 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:41.621 11:30:40 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:41.621 ************************************ 00:05:41.621 START TEST hugepages 00:05:41.621 ************************************ 00:05:41.621 11:30:40 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:41.621 * Looking for test storage... 
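hugepages.sh opens by asking get_meminfo for Hugepagesize, and because the suite runs under xtrace every key it skips in /proc/meminfo becomes a logged [[ ... ]] / continue pair, which is why the next stretch of the log is so long. For orientation only, a stripped-down reader (get_meminfo_sketch is a hypothetical name, not SPDK's; the real helper, per the "local node=" at setup/common.sh@18 below, also accepts a NUMA node and then reads that node's meminfo instead):

    get_meminfo_sketch() {
        local key=$1
        # print the second field of the matching "Key: value kB" line
        awk -v k="${key}:" '$1 == k { print $2 }' /proc/meminfo
    }
    get_meminfo_sketch Hugepagesize    # -> 2048 on this VM, per the trace below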
00:05:41.621 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:41.621 11:30:40 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:41.621 11:30:40 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:41.621 11:30:40 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:41.621 11:30:40 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:41.621 11:30:40 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:41.621 11:30:40 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:41.621 11:30:40 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 5809168 kB' 'MemAvailable: 7410828 kB' 'Buffers: 2436 kB' 'Cached: 1815020 kB' 'SwapCached: 0 kB' 'Active: 445432 kB' 'Inactive: 1474960 kB' 'Active(anon): 113448 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1474960 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 104392 kB' 'Mapped: 48824 kB' 'Shmem: 10512 kB' 'KReclaimable: 63312 kB' 'Slab: 136092 kB' 'SReclaimable: 63312 kB' 'SUnreclaim: 72780 kB' 'KernelStack: 6380 kB' 'PageTables: 4196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412436 kB' 'Committed_AS: 326844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': 
' 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.622 11:30:40 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # 
read -r var val _ 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.622 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
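A detail worth noting in the trace above: setup/common.sh@29 rewrites the mapfile'd lines with mem=("${mem[@]#Node +([0-9]) }"), an extglob pattern inside parameter expansion that strips the "Node <n> " prefix carried by per-node meminfo files, so one parser handles /proc/meminfo and /sys/devices/system/node/node*/meminfo alike. Illustrated on a hypothetical per-node line:

    shopt -s extglob
    line='Node 0 HugePages_Total: 2048'
    echo "${line#Node +([0-9]) }"      # -> HugePages_Total: 2048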
00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
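Every continue above is one non-matching meminfo key; in the very next entry Hugepagesize finally matches, the helper echoes 2048 and returns 0, and hugepages.sh records default_hugepages=2048. The loop producing this trace has roughly the following shape (a paraphrase of setup/common.sh@31-33 as seen under xtrace, not a verbatim copy; $get is the requested key and mem[] holds the meminfo lines):

    while IFS=': ' read -r var val _; do   # split "Key: value kB" into 3 fields
        [[ $var == "$get" ]] || continue   # each miss logs one continue above
        echo "$val"                        # the match: echo 2048
        break
    done < <(printf '%s\n' "${mem[@]}")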
00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:41.623 11:30:40 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:41.624 11:30:40 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:41.624 11:30:40 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:41.624 11:30:40 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:41.624 ************************************ 00:05:41.624 START TEST default_setup 00:05:41.624 ************************************ 00:05:41.624 11:30:40 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup 00:05:41.624 11:30:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:41.624 11:30:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:05:41.624 11:30:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:41.624 11:30:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:05:41.624 11:30:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # 
node_ids=('0') 00:05:41.624 11:30:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:05:41.624 11:30:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:41.624 11:30:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:41.624 11:30:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:41.624 11:30:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:41.624 11:30:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:05:41.624 11:30:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:41.624 11:30:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:41.624 11:30:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:41.624 11:30:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:41.624 11:30:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:41.624 11:30:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:41.624 11:30:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:41.624 11:30:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:05:41.624 11:30:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:05:41.624 11:30:40 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:05:41.624 11:30:40 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:42.209 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:42.776 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:42.777 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:42.777 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:42.777 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:42.777 
11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7913140 kB' 'MemAvailable: 9514596 kB' 'Buffers: 2436 kB' 'Cached: 1815008 kB' 'SwapCached: 0 kB' 'Active: 462072 kB' 'Inactive: 1474988 kB' 'Active(anon): 130088 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1474988 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 121160 kB' 'Mapped: 48696 kB' 'Shmem: 10472 kB' 'KReclaimable: 62848 kB' 'Slab: 135728 kB' 'SReclaimable: 62848 kB' 'SUnreclaim: 72880 kB' 'KernelStack: 6304 kB' 'PageTables: 4128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 346380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:42.777 11:30:41 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.777 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:42.778 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:42.778 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:42.778 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.778 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:42.778 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:42.778 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:42.778 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.778 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:42.778 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:42.778 11:30:41 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:05:42.778 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.778 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:42.778 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:42.778 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:42.778 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.778 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:42.778 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:42.778 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:42.778 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.778 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:42.778 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:42.778 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:42.778 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.778 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:42.778 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:42.778 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:42.778 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.778 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:42.778 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:42.778 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:42.778 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.778 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:42.778 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:42.778 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:42.778 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.778 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:42.778 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:42.778 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:42.778 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.778 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:42.778 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:42.778 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:42.778 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.778 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
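This second meminfo sweep belongs to verify_nr_hugepages in the default_setup test: hugepages.sh@96 first confirmed that the THP policy string ("always [madvise] never") does not read [never], so AnonHugePages is sampled to discount transparent huge pages from the explicit pool; a few entries below it matches, the helper returns 0, and anon=0 is recorded before moving on to HugePages_Surp. The snapshots above already show the pool the test reserved: HugePages_Total 1024 at Hugepagesize 2048 kB, i.e. 1024 * 2048 kB = 2097152 kB, exactly the Hugetlb figure reported. A hedged reconstruction of the THP guard (the sysfs path is standard Linux, not SPDK-specific):

    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        # THP active: sample anonymous huge pages so they are counted separately
        anon=$(awk '$1 == "AnonHugePages:" { print $2 }' /proc/meminfo)
    else
        anon=0    # THP disabled: nothing to discount
    fi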
00:05:42.778 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:42.778 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:42.778 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.778 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:42.778 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:42.778 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:42.778 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.778 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:42.778 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:42.778 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:42.778 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.778 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:42.778 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:42.778 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:42.778 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.778 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:42.778 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:42.778 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:42.778 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.778 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:42.778 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:42.778 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:42.778 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.778 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:42.778 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:42.778 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.041 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.041 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.041 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.041 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.041 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.041 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.041 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.041 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.041 11:30:41 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.041 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.041 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.041 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.041 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.041 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.041 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.041 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.041 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.041 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:43.041 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:43.041 11:30:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:05:43.041 11:30:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:43.041 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:43.041 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:43.041 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:43.041 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:43.041 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:43.041 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:43.041 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:43.041 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:43.041 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:43.041 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.041 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7913140 kB' 'MemAvailable: 9514596 kB' 'Buffers: 2436 kB' 'Cached: 1815008 kB' 'SwapCached: 0 kB' 'Active: 461996 kB' 'Inactive: 1474988 kB' 'Active(anon): 130012 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1474988 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 121120 kB' 'Mapped: 48624 kB' 'Shmem: 10472 kB' 'KReclaimable: 62848 kB' 'Slab: 135720 kB' 'SReclaimable: 62848 kB' 'SUnreclaim: 72872 kB' 'KernelStack: 6304 kB' 'PageTables: 4116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 346380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 
'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:05:43.041 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.041 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.041 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.041 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.041 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.041 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.041 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.041 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.041 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.041 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.041 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.041 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.041 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.041 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.041 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.042 11:30:41 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.042 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
read -r var val _ 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# continue 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.043 11:30:41 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7912640 kB' 'MemAvailable: 9514096 kB' 'Buffers: 2436 kB' 'Cached: 1815008 kB' 'SwapCached: 0 kB' 'Active: 461976 kB' 'Inactive: 1474988 kB' 'Active(anon): 129992 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1474988 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 121096 kB' 'Mapped: 48624 kB' 'Shmem: 10472 kB' 'KReclaimable: 62848 kB' 'Slab: 135720 kB' 'SReclaimable: 62848 kB' 'SUnreclaim: 72872 kB' 'KernelStack: 6304 kB' 'PageTables: 4116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 346380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.043 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.044 11:30:41 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var 
val _ 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.044 11:30:41 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.044 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.045 
11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:43.045 nr_hugepages=1024 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:43.045 resv_hugepages=0 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:43.045 surplus_hugepages=0 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:43.045 anon_hugepages=0 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == 
nr_hugepages )) 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.045 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.046 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7912916 kB' 'MemAvailable: 9514372 kB' 'Buffers: 2436 kB' 'Cached: 1815008 kB' 'SwapCached: 0 kB' 'Active: 461968 kB' 'Inactive: 1474988 kB' 'Active(anon): 129984 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1474988 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 121096 kB' 'Mapped: 48624 kB' 'Shmem: 10472 kB' 'KReclaimable: 62848 kB' 'Slab: 135720 kB' 'SReclaimable: 62848 kB' 'SUnreclaim: 72872 kB' 'KernelStack: 6304 kB' 'PageTables: 4116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 346380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:05:43.046 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.046 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.046 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.046 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.046 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.046 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.046 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.046 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.046 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:43.046-00:05:43.047 11:30:41 setup.sh.hugepages.default_setup [repeated xtrace condensed: setup/common.sh@31-32 reads each remaining /proc/meminfo field (Buffers through Unaccepted) with IFS=': ' / read -r var val _ and continues past every one that is not HugePages_Total]
00:05:43.047 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:43.047 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:05:43.047 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:43.048 11:30:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:43.048 11:30:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:05:43.048 11:30:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:05:43.048 11:30:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:43.048 11:30:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:43.048 11:30:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:43.048 11:30:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:43.048 11:30:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:43.048 11:30:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:43.048 11:30:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:43.048 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:43.048 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:05:43.048 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:05:43.048 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:05:43.048 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:43.048 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:43.048 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:43.048 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
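For orientation, here is a minimal bash sketch of the get_meminfo helper whose xtrace appears above, reconstructed from the trace itself (an approximation of setup/common.sh, not the verbatim source; extglob is needed for the Node-prefix strip):

shopt -s extglob   # the +([0-9]) pattern below needs extended globbing

get_meminfo() {
    local get=$1 node=$2
    local var val _ line
    local mem_f=/proc/meminfo mem

    # Per-node queries read the node's own meminfo file when it exists.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Per-node meminfo lines carry a "Node <n> " prefix; strip it.
    mem=("${mem[@]#Node +([0-9]) }")

    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # the long run of "continue" records above
        echo "$val"
        return 0
    done
    return 1
}

# Usage as seen in the trace: get_meminfo HugePages_Total   -> 1024 (system-wide)
#                             get_meminfo HugePages_Surp 0  -> value for node 0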
00:05:43.048 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:43.048 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:05:43.048 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7912916 kB' 'MemUsed: 4329056 kB' 'SwapCached: 0 kB' 'Active: 462216 kB' 'Inactive: 1474988 kB' 'Active(anon): 130232 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1474988 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'FilePages: 1817444 kB' 'Mapped: 48624 kB' 'AnonPages: 121080 kB' 'Shmem: 10472 kB' 'KernelStack: 6304 kB' 'PageTables: 4116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62848 kB' 'Slab: 135716 kB' 'SReclaimable: 62848 kB' 'SUnreclaim: 72868 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:05:43.048-00:05:43.049 11:30:41 setup.sh.hugepages.default_setup [repeated xtrace condensed: the same setup/common.sh@31-32 read loop skips every node0 meminfo field from MemTotal through HugePages_Free, none matching HugePages_Surp]
00:05:43.049 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:43.049 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:05:43.049 11:30:41 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:43.049 node0=1024 expecting 1024
************************************
00:05:43.049 END TEST default_setup
************************************
00:05:43.049 11:30:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:43.049 11:30:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:43.049 11:30:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:43.049 11:30:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:43.049 11:30:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:43.049 11:30:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:43.049
00:05:43.049 real 0m1.477s
00:05:43.049 user 0m0.618s
00:05:43.049 sys 0m0.815s
00:05:43.049 11:30:41 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:43.049 11:30:41 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:05:43.049 11:30:42 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
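The check that just passed ("node0=1024 expecting 1024") compares the kernel's per-node hugepage counts against the test's expectation. A condensed sketch of that verification logic, reusing the get_meminfo sketch above (an approximation of setup/hugepages.sh, not the verbatim source):

verify_nr_hugepages_sketch() {
    local nr_hugepages=$1 surp resv node
    local -a nodes_test nodes_sys

    surp=$(get_meminfo HugePages_Surp)
    resv=$(get_meminfo HugePages_Rsvd)

    # Global check: the kernel's total must equal expected + surplus + reserved.
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) || return 1

    # get_nodes: record what each NUMA node actually reports.
    for node in /sys/devices/system/node/node[0-9]*; do
        nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
    done

    # Expectation for node 0, adjusted by reserved and per-node surplus pages.
    nodes_test[0]=$nr_hugepages
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv + $(get_meminfo HugePages_Surp "$node") ))
        echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
        [[ ${nodes_test[node]} -eq ${nodes_sys[node]} ]] || return 1
    done
}

# verify_nr_hugepages_sketch 1024   -> prints "node0=1024 expecting 1024" on this box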
00:05:43.049 11:30:42 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:43.049 11:30:42 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:43.049 11:30:42 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:43.049 ************************************
00:05:43.049 START TEST per_node_1G_alloc
00:05:43.049 ************************************
00:05:43.049 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc
00:05:43.049 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:05:43.049 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:05:43.049 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:05:43.049 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:43.049 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:05:43.049 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:43.049 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:05:43.049 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:43.049 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:05:43.049 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:43.049 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:43.049 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:43.049 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:05:43.049 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:43.049 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:43.049 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:43.049 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:43.049 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:43.049 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:05:43.049 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:05:43.049 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:05:43.049 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0
00:05:43.049 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:05:43.049 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:43.049 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:43.619 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:43.619 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:43.619 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
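The get_test_nr_hugepages trace above converts a 1 GiB request into a page count: 1048576 kB divided by the default 2048 kB hugepage size gives nr_hugepages=512, pinned to the node IDs passed after the size argument. A rough bash sketch of that computation (the helper body is an approximation reconstructed from the trace):

get_test_nr_hugepages_sketch() {
    local size=$1; shift                 # requested size in kB, e.g. 1048576
    local default_hugepages=2048         # kB, the Hugepagesize reported in meminfo
    local -a node_ids=("$@")
    local -g -a nodes_test=()
    local nr_hugepages node

    (( size >= default_hugepages )) || return 1
    nr_hugepages=$(( size / default_hugepages ))    # 1048576 / 2048 = 512

    for node in "${node_ids[@]}"; do
        nodes_test[node]=$nr_hugepages              # 512 pages requested on node 0
    done
    echo "$nr_hugepages"
}

# get_test_nr_hugepages_sketch 1048576 0   -> 512, after which the test runs
# the setup script with NRHUGE=512 HUGENODE=0, as traced above.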
00:05:43.619 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:43.619 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:43.619 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512
00:05:43.619 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:05:43.619 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:05:43.619 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:43.619 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:43.619 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:43.619 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:43.619 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:43.619 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:43.619 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:43.619 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:43.619 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:05:43.619 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:43.619 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:43.619 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:43.619 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:43.619 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:43.619 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:43.620 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:43.620 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:43.620 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8961260 kB' 'MemAvailable: 10562716 kB' 'Buffers: 2436 kB' 'Cached: 1815008 kB' 'SwapCached: 0 kB' 'Active: 462252 kB' 'Inactive: 1474988 kB' 'Active(anon): 130268 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1474988 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 121340 kB' 'Mapped: 48840 kB' 'Shmem: 10472 kB' 'KReclaimable: 62848 kB' 'Slab: 135656 kB' 'SReclaimable: 62848 kB' 'SUnreclaim: 72808 kB' 'KernelStack: 6312 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 346380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB'
00:05:43.619-00:05:43.621 11:30:42 setup.sh.hugepages.per_node_1G_alloc [repeated xtrace condensed: the setup/common.sh@31-32 read loop skips every meminfo field from MemTotal through HardwareCorrupted, none matching AnonHugePages]
00:05:43.621 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:43.621 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:43.621 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:43.621 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
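A note on the anon=0 just recorded: verify_nr_hugepages only counts AnonHugePages when transparent hugepages are not globally disabled, which is what the earlier [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test checks. Sketched in bash, with the THP mode string read from sysfs and the get_meminfo sketch from above (approximate, not the verbatim script):

# Anonymous THP pages only matter if the THP mode is not set to [never].
anon=0
thp_mode=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
if [[ $thp_mode != *"[never]"* ]]; then
    anon=$(get_meminfo AnonHugePages)   # 0 kB in this run
fi
surp=$(get_meminfo HugePages_Surp)      # the lookup the trace performs next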
get_meminfo HugePages_Surp 00:05:43.621 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:43.621 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:43.621 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:43.621 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:43.621 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:43.621 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:43.621 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:43.621 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:43.621 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:43.621 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.621 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.621 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8961516 kB' 'MemAvailable: 10562972 kB' 'Buffers: 2436 kB' 'Cached: 1815008 kB' 'SwapCached: 0 kB' 'Active: 462168 kB' 'Inactive: 1474988 kB' 'Active(anon): 130184 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1474988 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 121292 kB' 'Mapped: 48684 kB' 'Shmem: 10472 kB' 'KReclaimable: 62848 kB' 'Slab: 135672 kB' 'SReclaimable: 62848 kB' 'SUnreclaim: 72824 kB' 'KernelStack: 6304 kB' 'PageTables: 4280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 346380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:05:43.621 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.621 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.621 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.621 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.621 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.621 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.621 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.621 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.621 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.621 
[... per-key xtrace elided: the scan walks MemTotal through HugePages_Rsvd, each key taking the continue branch, until the requested key matches ...]
00:05:43.623 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:43.623 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:43.623 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
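The block above is the xtrace of get_meminfo() from setup/common.sh. As a reading aid, here is a minimal sketch of what the traced commands amount to, reconstructed from the trace alone rather than copied from the SPDK source; the per-node branch is inferred, since the trace (empty node argument) only shows its tests evaluating false:

    shopt -s extglob   # needed for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1 node=$2   # key to look up; optional NUMA node
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # With a node given, prefer that node's meminfo file (inferred branch)
        if [[ -e /sys/devices/system/node/node$node/meminfo ]] && [[ -n $node ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node <N> "; strip that prefix
        mem=("${mem[@]#Node +([0-9]) }")
        # Scan "Key: value ..." lines; print the value of the requested key
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

With no node argument the existence test collapses to /sys/devices/system/node/node/meminfo, which is exactly the odd-looking path in the trace, so the function falls back to the system-wide /proc/meminfo. get_meminfo HugePages_Surp accordingly prints 0 here, which hugepages.sh stores as surp=0 below.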
00:05:43.623 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:05:43.623 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:43.623 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:43.623 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:05:43.623 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:43.623 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:43.623 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:43.623 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:43.623 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:43.623 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:43.623 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:43.897 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:43.897 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:43.897 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8962728 kB' 'MemAvailable: 10564184 kB' 'Buffers: 2436 kB' 'Cached: 1815008 kB' 'SwapCached: 0 kB' 'Active: 461976 kB' 'Inactive: 1474988 kB' 'Active(anon): 129992 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1474988 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 121396 kB' 'Mapped: 48684 kB' 'Shmem: 10472 kB' 'KReclaimable: 62848 kB' 'Slab: 135672 kB' 'SReclaimable: 62848 kB' 'SUnreclaim: 72824 kB' 'KernelStack: 6320 kB' 'PageTables: 4328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 346380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB'
[... per-key xtrace elided: the scan walks MemTotal through HugePages_Free, each key taking the continue branch, until HugePages_Rsvd matches ...]
00:05:43.900 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:43.900 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:43.900 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:43.900 nr_hugepages=512 resv_hugepages=0 surplus_hugepages=0 anon_hugepages=0
00:05:43.900 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:43.900 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
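hugepages.sh is about to assert that the values just gathered add up (the (( ... )) lines at hugepages.sh@107 and @109 in the next chunk). A stand-alone sketch of that arithmetic, mirroring the trace rather than quoting the script verbatim:

    # anon/surp/resv are the values the three traced get_meminfo calls returned;
    # nr_hugepages=512 was read earlier (it is echoed above as nr_hugepages=512)
    nr_hugepages=512 surp=0 resv=0 anon=0

    # Requested pages must be fully accounted for: configured + surplus + reserved
    (( 512 == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch"

    # With zero surplus/reserved the configured count must equal the request;
    # the script then re-reads HugePages_Total from /proc/meminfo as a cross-check
    (( 512 == nr_hugepages )) || echo "unexpected nr_hugepages"

Both checks pass here (512 == 512 + 0 + 0), so the trace proceeds to the HugePages_Total lookup.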
00:05:43.900 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:43.900 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:43.900 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:43.900 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:43.900 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:05:43.900 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:43.900 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:43.900 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:05:43.900 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:43.900 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:43.900 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:43.900 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:43.900 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:43.900 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:43.900 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:43.900 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:43.901 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:43.901 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8963348 kB' 'MemAvailable: 10564804 kB' 'Buffers: 2436 kB' 'Cached: 1815008 kB' 'SwapCached: 0 kB' 'Active: 461972 kB' 'Inactive: 1474988 kB' 'Active(anon): 129988 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1474988 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 121144 kB' 'Mapped: 48624 kB' 'Shmem: 10472 kB' 'KReclaimable: 62848 kB' 'Slab: 135600 kB' 'SReclaimable: 62848 kB' 'SUnreclaim: 72752 kB' 'KernelStack: 6288 kB' 'PageTables: 4060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 346380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB'
[... per-key xtrace elided: the HugePages_Total scan walks MemTotal through WritebackTmp, each key taking the continue branch; it has reached CommitLimit where this chunk of the log breaks off ...]
00:05:43.902 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:43.902 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:05:43.902 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:05:43.903 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.903 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.903 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.903 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.903 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.903 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.903 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.903 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.903 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.903 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.903 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.903 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.903 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.903 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.903 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.903 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.903 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.903 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.903 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.903 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.903 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.903 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.903 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.903 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.903 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.903 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.903 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.903 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.903 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.903 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.903 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.903 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.903 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:43.903 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.903 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.903 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.903 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.903 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.903 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.903 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.903 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.903 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.903 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.903 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.903 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.903 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.903 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.903 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.903 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.903 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.903 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.904 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.904 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.904 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.904 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.904 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.904 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.904 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.904 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:05:43.904 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:43.904 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:43.904 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:43.904 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:43.904 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:43.904 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:43.904 11:30:42 
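
For readers following the trace: the block above is setup/common.sh's get_meminfo() walking one meminfo-style file key by key, which is why the log shows one "[[ <key> == \H\u\g\e... ]] / continue" pair per line of the file (the backslash-escaped pattern is just how bash xtrace renders a literal match). A minimal self-contained sketch of the same pattern -- simplified, not a verbatim copy of the traced script, which buffers the file with mapfile and strips per-node prefixes with an extglob substitution instead of sed:

    #!/usr/bin/env bash
    # Sketch of the get_meminfo pattern seen in the xtrace above.
    get_meminfo() {
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo
        # A per-node query switches to that node's meminfo file if present.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        # Per-node files prefix every line with "Node N "; strip it so the
        # key lands in $var. IFS=': ' splits "HugePages_Total:  512" into
        # var=HugePages_Total, val=512.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }
    get_meminfo HugePages_Total   # printed 512 at this point in the trace
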
00:05:43.904 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:43.904 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:43.904 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:43.904 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:43.904 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:43.904 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:43.904 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:05:43.904 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:43.904 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:43.904 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:43.904 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:43.904 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:43.904 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:43.904 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:43.904 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:43.904 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8963348 kB' 'MemUsed: 3278624 kB' 'SwapCached: 0 kB' 'Active: 462068 kB' 'Inactive: 1474988 kB' 'Active(anon): 130084 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1474988 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'FilePages: 1817444 kB' 'Mapped: 48624 kB' 'AnonPages: 121212 kB' 'Shmem: 10472 kB' 'KernelStack: 6304 kB' 'PageTables: 4108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62848 kB' 'Slab: 135600 kB' 'SReclaimable: 62848 kB' 'SUnreclaim: 72752 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
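
The entry worth pausing on above is setup/common.sh@29: /sys/devices/system/node/node0/meminfo prefixes every line with "Node 0 ", and the script strips that prefix with an extglob substitution before the key scan, so the same parser works for both the global and the per-node file. A standalone illustration (extglob has to be enabled for the +([0-9]) pattern):

    #!/usr/bin/env bash
    shopt -s extglob
    # Read node0's meminfo, then strip the "Node 0 " prefix from each line:
    # "Node 0 MemTotal: 12241972 kB" -> "MemTotal: 12241972 kB"
    mapfile -t mem < /sys/devices/system/node/node0/meminfo
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}"   # same shape as the dump in the trace above
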
[...] setup/common.sh@31-32 read/continue iterations elided: MemTotal through HugePages_Free from the node0 dump above are each compared against HugePages_Surp and skipped [...]
00:05:43.907 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:43.907 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:43.907 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:43.907 node0=512 expecting 512
00:05:43.907 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:43.907 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:43.907 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:43.907 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:43.907 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:05:43.907 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:05:43.907
00:05:43.907 real 0m0.735s
00:05:43.907 user 0m0.341s
00:05:43.907 sys 0m0.424s
00:05:43.907 ************************************
00:05:43.907 END TEST per_node_1G_alloc
00:05:43.907 ************************************
00:05:43.907 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:43.907 11:30:42 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:05:43.907 11:30:42 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:05:43.907 11:30:42 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:43.907 11:30:42 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:43.907 11:30:42 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:43.907 ************************************
00:05:43.907 START TEST even_2G_alloc
00:05:43.907 ************************************
00:05:43.907 11:30:42 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc
00:05:43.907 11:30:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:05:43.907 11:30:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:05:43.907 11:30:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:43.907 11:30:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:43.907 11:30:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:43.907 11:30:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:43.907 11:30:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:43.907 11:30:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:43.907 11:30:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
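
The sizing just traced (@152 through @57) is plain arithmetic: the argument to get_test_nr_hugepages is a size in kB (2097152 kB = 2 GiB, matching the test's name), and with the 2048 kB Hugepagesize this host reports that comes out to 1024 pages. A sketch of that computation, with illustrative variable names rather than the script's own:

    size_kb=2097152                                            # 2 GiB, in kB
    hp_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this host
    echo $(( size_kb / hp_kb ))                                # 1024 -> nr_hugepages
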
00:05:43.907 11:30:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:43.907 11:30:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:43.907 11:30:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:43.907 11:30:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:43.907 11:30:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:43.907 11:30:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:43.907 11:30:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:05:43.907 11:30:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:05:43.907 11:30:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:05:43.907 11:30:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:43.907 11:30:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:05:43.907 11:30:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:05:43.907 11:30:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:05:43.908 11:30:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:43.908 11:30:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:44.166 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:44.428 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:44.428 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:44.428 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:44.428 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:44.428 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:05:44.428 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:05:44.428 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:44.428 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:44.428 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:44.428 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:44.428 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:44.428 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:44.428 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:44.428 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:44.428 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:05:44.428 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:44.428 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:44.428 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:44.428 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
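
NRHUGE and HUGE_EVEN_ALLOC a few entries back are environment knobs consumed by scripts/setup.sh; the first line below reproduces the traced invocation by hand. The loop after it is only a sketch of what an even spread means at the kernel sysfs level -- a standard interface, but not necessarily how setup.sh implements it internally -- and it needs root:

    # As traced: 1024 x 2 MiB pages, spread evenly across NUMA nodes.
    NRHUGE=1024 HUGE_EVEN_ALLOC=yes /home/vagrant/spdk_repo/spdk/scripts/setup.sh

    # Hand-rolled even spread via sysfs (illustrative only):
    shopt -s nullglob
    nodes=(/sys/devices/system/node/node[0-9]*)
    per_node=$(( 1024 / ${#nodes[@]} ))
    for n in "${nodes[@]}"; do
        echo "$per_node" > "$n/hugepages/hugepages-2048kB/nr_hugepages"
    done
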
00:05:44.428 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:44.428 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:44.428 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:44.428 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:44.428 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:44.428 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7912076 kB' 'MemAvailable: 9513532 kB' 'Buffers: 2436 kB' 'Cached: 1815008 kB' 'SwapCached: 0 kB' 'Active: 462096 kB' 'Inactive: 1474988 kB' 'Active(anon): 130112 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1474988 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 121208 kB' 'Mapped: 48756 kB' 'Shmem: 10472 kB' 'KReclaimable: 62848 kB' 'Slab: 135564 kB' 'SReclaimable: 62848 kB' 'SUnreclaim: 72716 kB' 'KernelStack: 6264 kB' 'PageTables: 4100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 346380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB'
[...] setup/common.sh@31-32 read/continue iterations elided: MemTotal through HardwareCorrupted from the dump above are each compared against AnonHugePages and skipped [...]
00:05:44.430 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:44.430 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:05:44.430 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:44.430 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:05:44.430 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:44.430 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:44.430 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:05:44.430 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:44.430 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:44.430 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:44.430 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:44.430 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:44.430 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:44.430 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:44.430 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:44.430 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:44.430 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7912076 kB' 'MemAvailable: 9513532 kB' 'Buffers: 2436 kB' 'Cached: 1815008 kB' 'SwapCached: 0 kB' 'Active: 461864 kB' 'Inactive: 1474988 kB' 'Active(anon): 129880 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1474988 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 121272 kB' 'Mapped: 48756 kB' 'Shmem: 10472 kB' 'KReclaimable: 62848 kB' 'Slab: 135564 kB' 'SReclaimable: 62848 kB' 'SUnreclaim: 72716 kB' 'KernelStack: 6280 kB' 'PageTables: 4136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 346380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB'
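
At this point verify_nr_hugepages has AnonHugePages (anon=0) and is about to scan the fresh dump above for HugePages_Surp; together with the reserved-page counter these feed the same accounting identity seen earlier in the trace, (( total == nr_hugepages + surp + resv )). A self-contained sketch of that check -- reading /proc/meminfo directly, and taking resv from HugePages_Rsvd, which is an assumption based on the counter names:

    nr_hugepages=1024   # what the test configured
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    (( total == nr_hugepages + surp + resv )) \
        && echo "hugepage accounting consistent" \
        || echo "mismatch: total=$total surp=$surp resv=$resv" >&2
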
[...] setup/common.sh@31-32 read/continue iterations elided: MemTotal through SReclaimable are each compared against HugePages_Surp and skipped; the scan continues [...]
00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc
-- setup/common.sh@31 -- # read -r var val _ 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.431 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.432 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.432 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.432 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.432 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.432 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.432 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.432 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.432 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.432 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:44.432 
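The trace above is setup/common.sh's get_meminfo helper scanning /proc/meminfo one "key: value" line at a time until the requested field matches. A minimal sketch of that helper, reconstructed only from what the xtrace shows here (the real setup/common.sh may differ outside the traced lines; the extglob strip of per-node "Node <id> " prefixes assumes shopt -s extglob is in effect):

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below (assumption)

    get_meminfo() {    # sketch of setup/common.sh's get_meminfo, per this trace
        local get=$1 node=${2:-}   # field to fetch; optional NUMA node (empty in this run)
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # With a node id, read the per-node meminfo instead; node is empty here,
        # so /sys/devices/system/node/node/meminfo does not exist and this is skipped.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # strip "Node N " prefixes from per-node files
        # The printf traced at common.sh@16 feeds the read loop one field per line.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # quoted RHS: literal match, no globbing
            echo "${val:-0}"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    surp=$(get_meminfo HugePages_Surp)   # -> 0, matching the surp=0 recorded at hugepages.sh@99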
00:05:44.432 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:44.432 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:44.432 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:05:44.432 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:44.432 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:44.432 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:44.432 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:44.432 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:44.432 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:44.432 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:44.432 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:44.432 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7912328 kB' 'MemAvailable: 9513784 kB' 'Buffers: 2436 kB' 'Cached: 1815008 kB' 'SwapCached: 0 kB' 'Active: 461764 kB' 'Inactive: 1474988 kB' 'Active(anon): 129780 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1474988 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 121136 kB' 'Mapped: 48628 kB' 'Shmem: 10472 kB' 'KReclaimable: 62848 kB' 'Slab: 135592 kB' 'SReclaimable: 62848 kB' 'SUnreclaim: 72744 kB' 'KernelStack: 6288 kB' 'PageTables: 4060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 346380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB'
00:05:44.432 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:44.432 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:44.432 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[~50 identical iterations elided: every remaining /proc/meminfo key from MemFree through HugePages_Free fails the \H\u\g\e\P\a\g\e\s\_\R\s\v\d comparison at setup/common.sh@32 and the loop continues]
00:05:44.434 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:44.434 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:05:44.434 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:44.434 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:44.434 nr_hugepages=1024
00:05:44.434 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:44.434 resv_hugepages=0
00:05:44.434 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:44.434 surplus_hugepages=0
00:05:44.434 anon_hugepages=0
00:05:44.434 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:44.434 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:44.434 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:44.434 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
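What setup/hugepages.sh@97-@110 is doing around these lines: with nr_hugepages=1024 configured for the even-2G-allocation test, it collects the anonymous, surplus, and reserved hugepage counters and checks that the pool is exactly the requested size with nothing surplus or reserved. A hedged paraphrase of that accounting (variable names follow the log; the surrounding control flow and the origin of the 1024 literal are assumptions):

    nr_hugepages=1024                    # pool size requested by the even_2G_alloc test
    anon=$(get_meminfo AnonHugePages)    # -> 0 (hugepages.sh@97)
    surp=$(get_meminfo HugePages_Surp)   # -> 0 (hugepages.sh@99)
    resv=$(get_meminfo HugePages_Rsvd)   # -> 0 (hugepages.sh@100)
    echo "nr_hugepages=$nr_hugepages"    # the four echoes traced at @102-@105
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"
    # Both traced checks hold only because surp and resv are zero:
    (( nr_hugepages == nr_hugepages + surp + resv ))   # @107: 1024 == 1024 + 0 + 0
    (( 1024 == nr_hugepages ))                         # @109: pool matches the request
    get_meminfo HugePages_Total   # @110: the kernel's own total; its comparison falls outside this excerpt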
00:05:44.434 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:44.434 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:44.434 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:05:44.695 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:44.695 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:44.695 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:44.695 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:44.695 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:44.695 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:44.695 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:44.695 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:44.695 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:44.695 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7912328 kB' 'MemAvailable: 9513784 kB' 'Buffers: 2436 kB' 'Cached: 1815008 kB' 'SwapCached: 0 kB' 'Active: 461816 kB' 'Inactive: 1474988 kB' 'Active(anon): 129832 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1474988 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 121224 kB' 'Mapped: 48628 kB' 'Shmem: 10472 kB' 'KReclaimable: 62848 kB' 'Slab: 135592 kB' 'SReclaimable: 62848 kB' 'SUnreclaim: 72744 kB' 'KernelStack: 6304 kB' 'PageTables: 4108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 346380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB'
00:05:44.695 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:44.695 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[scan elided: MemFree through CmaTotal each fail the \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l comparison at setup/common.sh@32; this excerpt of the log ends mid-scan, before the HugePages_Total key is reached]
11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.696 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.696 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.696 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.696 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.696 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.696 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.696 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.696 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.696 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.696 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.697 11:30:43 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7912580 kB' 'MemUsed: 4329392 kB' 'SwapCached: 0 kB' 'Active: 461884 kB' 'Inactive: 1474984 kB' 'Active(anon): 129900 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1474984 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'FilePages: 1817440 kB' 'Mapped: 48628 kB' 'AnonPages: 121324 kB' 'Shmem: 10472 kB' 'KernelStack: 6256 kB' 'PageTables: 3980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62848 kB' 'Slab: 135580 kB' 'SReclaimable: 62848 kB' 'SUnreclaim: 72732 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.697 11:30:43 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.697 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.698 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.698 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.698 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.698 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.698 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.698 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.698 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.698 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.698 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.698 11:30:43 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.698 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.698 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.698 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.698 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.698 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.698 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.698 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.698 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.698 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.698 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.698 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.698 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.698 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.698 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.698 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.698 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.698 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.698 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.698 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.698 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.698 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.698 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.698 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.698 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.698 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.698 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.698 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.698 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.698 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.698 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.698 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.698 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.698 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.698 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:05:44.698 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.698 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.698 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.698 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.698 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.698 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.698 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.698 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.698 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.698 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.698 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.698 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.698 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.698 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.698 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.698 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.698 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.698 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:44.698 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:44.698 node0=1024 expecting 1024 00:05:44.698 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:44.698 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:44.698 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:44.698 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:44.698 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:44.698 11:30:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:44.698 00:05:44.698 real 0m0.711s 00:05:44.698 user 0m0.299s 00:05:44.698 sys 0m0.442s 00:05:44.698 11:30:43 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:44.698 11:30:43 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:44.698 ************************************ 00:05:44.698 END TEST even_2G_alloc 00:05:44.698 ************************************ 00:05:44.698 11:30:43 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:44.698 11:30:43 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:44.698 11:30:43 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:44.698 11:30:43 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 
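Before the next test begins, note what even_2G_alloc just finished verifying: for each NUMA node the script folded surplus (and reserved) pages into the per-node tally and compared the sum against the expected count, which is where the node0=1024 expecting 1024 line above comes from. A minimal sketch of that per-node check, reusing the get_meminfo_value sketch shown earlier; the expected count and the exit-on-mismatch handling are illustrative simplifications.

# Simplified per-node verification in the spirit of verify_nr_hugepages;
# the real script also tracks reserved pages and user-requested nodes.
expected=1024
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    free=$(get_meminfo_value HugePages_Free "$node")
    surp=$(get_meminfo_value HugePages_Surp "$node")
    echo "node${node}=$((free + surp)) expecting ${expected}"
    (( free + surp == expected )) || exit 1
done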
00:05:44.698 ************************************ 00:05:44.698 START TEST odd_alloc 00:05:44.698 ************************************ 00:05:44.698 11:30:43 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc 00:05:44.698 11:30:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:44.698 11:30:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:05:44.698 11:30:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:44.698 11:30:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:44.698 11:30:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:05:44.698 11:30:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:44.698 11:30:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:44.698 11:30:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:44.698 11:30:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:44.698 11:30:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:44.698 11:30:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:44.698 11:30:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:44.698 11:30:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:44.698 11:30:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:44.698 11:30:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:44.698 11:30:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:05:44.698 11:30:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:44.698 11:30:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:44.698 11:30:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:44.698 11:30:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:44.698 11:30:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:05:44.698 11:30:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:05:44.698 11:30:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:44.698 11:30:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:44.957 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:45.229 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:45.229 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:45.229 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:45.229 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:45.229 11:30:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:45.229 11:30:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:05:45.229 11:30:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:45.229 11:30:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # 
local surp 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7914232 kB' 'MemAvailable: 9515688 kB' 'Buffers: 2436 kB' 'Cached: 1815008 kB' 'SwapCached: 0 kB' 'Active: 462632 kB' 'Inactive: 1474988 kB' 'Active(anon): 130648 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1474988 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 121772 kB' 'Mapped: 48836 kB' 'Shmem: 10472 kB' 'KReclaimable: 62848 kB' 'Slab: 135548 kB' 'SReclaimable: 62848 kB' 'SUnreclaim: 72700 kB' 'KernelStack: 6244 kB' 'PageTables: 4020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 346380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
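The get_test_nr_hugepages entries at the start of odd_alloc above show a request of 2098176 kB (HUGEMEM=2049, in MB) landing on nr_hugepages=1025. With 2048 kB pages that request is 1024.5 pages, so the count must round up to produce an odd total; a ceiling division reproduces the figure. The arithmetic below is an assumption for illustration, as the real helper may compute it differently.

# Assumed arithmetic reproducing the sizing in the trace: 2049 MB spread
# over 2048 kB hugepages rounds up to the odd count 1025.
hugemem_mb=2049
hugepage_kb=2048
size_kb=$((hugemem_mb * 1024))                         # 2098176 kB
nr_hugepages=$(( (size_kb + hugepage_kb - 1) / hugepage_kb ))
echo "nr_hugepages=${nr_hugepages}"                    # prints 1025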
00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
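The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] entry just above is odd_alloc checking the transparent-hugepage mode before sampling AnonHugePages, since the field is only meaningful when THP is not pinned to never. A sketch of that guard, assuming the standard sysfs path and the get_meminfo_value sketch from earlier; variable names are illustrative.

# Guard sketch: only sample AnonHugePages when transparent hugepages can
# actually be handed out, i.e. "[never]" is not the selected mode.
thp_mode=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
anon=0
if [[ $thp_mode != *"[never]"* ]]; then
    anon=$(get_meminfo_value AnonHugePages)
fi
echo "anon=${anon}"    # the trace reaches the same result, anon=0, just below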
00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.230 
11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.230 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.231 11:30:44 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
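A side note on why every key in these comparisons renders as \H\u\g\e\P\a\g\e\s\_\S\u\r\p and the like: under set -x, bash re-quotes the right-hand side of a [[ == ]] test by backslash-escaping each character whenever the pattern came from a quoted expansion, which is the trace's way of showing that the match is a literal string comparison rather than a glob. A two-line reproduction, with an illustrative key:

# Reproducing the \H\u\g\e... rendering: xtrace escapes a quoted
# [[ == ]] pattern character by character.
set -x
get=AnonHugePages
[[ MemTotal == "$get" ]] || echo "no match, keep scanning"
set +x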
00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7914268 kB' 'MemAvailable: 9515728 kB' 'Buffers: 2436 kB' 'Cached: 1815012 kB' 'SwapCached: 0 kB' 'Active: 462048 kB' 'Inactive: 1474992 kB' 'Active(anon): 130064 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1474992 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 121204 kB' 'Mapped: 48628 kB' 'Shmem: 10472 kB' 'KReclaimable: 62848 kB' 'Slab: 135548 kB' 'SReclaimable: 62848 kB' 'SUnreclaim: 72700 kB' 'KernelStack: 6288 kB' 'PageTables: 4060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 346380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.231 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.232 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.232 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.232 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.232 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.232 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.232 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.232 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.232 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.232 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.232 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.232 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.232 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.232 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.232 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.232 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.232 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:45.232 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.232 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.232 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.232 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.232 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.232 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.232 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.232 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.232 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.232 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.232 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.232 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.232 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.232 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.232 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.232 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.232 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.232 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.232 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.232 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.232 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.232 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.232 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.232 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.232 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.232 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.232 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.232 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.232 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.232 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.232 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.232 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.232 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.232 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.232 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.232 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.232 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:05:45.232 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:45.232 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
[... identical IFS / read / compare / continue xtrace elided for every remaining /proc/meminfo key (Dirty through HugePages_Rsvd); none matches \H\u\g\e\P\a\g\e\s\_\S\u\r\p ...]
00:05:45.233 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:45.233 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:05:45.233 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:05:45.233 11:30:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
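The pass just traced is the test's get_meminfo helper resolving HugePages_Surp to 0: it reads a meminfo file, then walks it with IFS=': ' and read -r var val _ until the requested key matches, echoing the value. A minimal stand-alone sketch of the same technique in bash (the name lookup_meminfo is illustrative, and a sed prefix-strip stands in for the script's own extglob expansion, so this is an approximation of the helper, not the helper itself):

    #!/usr/bin/env bash
    # Sketch: resolve one key from /proc/meminfo or a node-scoped meminfo file.
    lookup_meminfo() {
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo
        # A node argument switches to that node's counters, when present.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        # Node files prefix every line with "Node <n> "; strip it so the key
        # always lands in $var when splitting on ': '.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }
    # e.g. lookup_meminfo HugePages_Surp    -> 0 (system-wide, as above)
    #      lookup_meminfo HugePages_Surp 0  -> 0 (NUMA node 0)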
00:05:45.233 11:30:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:45.233 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:45.233 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:05:45.233 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:05:45.233 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:45.233 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:45.233 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:45.233 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:45.233 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:45.233 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:45.233 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:45.233 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7914804 kB' 'MemAvailable: 9516264 kB' 'Buffers: 2436 kB' 'Cached: 1815012 kB' 'SwapCached: 0 kB' 'Active: 462068 kB' 'Inactive: 1474992 kB' 'Active(anon): 130084 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1474992 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 121260 kB' 'Mapped: 48628 kB' 'Shmem: 10472 kB' 'KReclaimable: 62848 kB' 'Slab: 135548 kB' 'SReclaimable: 62848 kB' 'SUnreclaim: 72700 kB' 'KernelStack: 6304 kB' 'PageTables: 4108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 346380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB'
00:05:45.233 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
[... identical per-key scan elided: MemTotal through HugePages_Free all fail the \H\u\g\e\P\a\g\e\s\_\R\s\v\d match and hit continue ...]
00:05:45.505 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:45.505 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:05:45.505 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:05:45.505 nr_hugepages=1025 resv_hugepages=0 surplus_hugepages=0
11:30:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
11:30:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
11:30:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
11:30:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
anon_hugepages=0
11:30:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
11:30:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
11:30:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
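The two arithmetic guards above are the heart of the odd_alloc check: the 1025 pages the test requested must be exactly accounted for by the kernel's counters. A hedged restatement of that identity, reusing the lookup_meminfo sketch from earlier (requested=1025 is taken from the echoes above; the real script drives this through setup/hugepages.sh, not through this snippet):

    requested=1025                             # odd allocation under test
    total=$(lookup_meminfo HugePages_Total)    # 1025 in this run
    surp=$(lookup_meminfo HugePages_Surp)      # 0
    resv=$(lookup_meminfo HugePages_Rsvd)      # 0
    (( total == requested + surp + resv )) || echo 'hugepage accounting mismatch' >&2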
11:30:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7915352 kB' 'MemAvailable: 9516812 kB' 'Buffers: 2436 kB' 'Cached: 1815012 kB' 'SwapCached: 0 kB' 'Active: 462076 kB' 'Inactive: 1474992 kB' 'Active(anon): 130092 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1474992 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 121264 kB' 'Mapped: 48628 kB' 'Shmem: 10472 kB' 'KReclaimable: 62848 kB' 'Slab: 135532 kB' 'SReclaimable: 62848 kB' 'SUnreclaim: 72684 kB' 'KernelStack: 6304 kB' 'PageTables: 4108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 346380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB'
[... identical per-key scan elided: MemTotal through Unaccepted all fail the \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l match and hit continue ...]
00:05:45.508 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:45.508 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:05:45.508 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:05:45.508 11:30:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:05:45.508 11:30:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:45.508 11:30:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:05:45.508 11:30:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:45.508 11:30:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025
00:05:45.508 11:30:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:45.508 11:30:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:45.508 11:30:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:45.508 11:30:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
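get_nodes above found a single NUMA node and seeded it with the full 1025-page allocation; the loop that follows folds the reserved count into the expectation and then reads the node-scoped counters (note the switch to the node0 meminfo path and the Node-prefix strip in the trace below). A sketch of that per-node pass, again reusing the lookup_meminfo sketch and assuming this run's single-node layout:

    declare -A expected
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        expected[${node_dir##*node}]=1025    # pages the test allocated
    done
    resv=$(lookup_meminfo HugePages_Rsvd)    # global count; node files lack this key
    for node in "${!expected[@]}"; do
        (( expected[$node] += resv ))        # reserved pages still count toward the node
        surp=$(lookup_meminfo HugePages_Surp "$node")
        echo "node$node: expected=${expected[$node]} surplus=$surp"
    done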
00:05:45.508 11:30:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:45.508 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:45.508 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:05:45.508 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:05:45.508 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:45.508 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:45.508 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:45.508 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:45.508 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:45.508 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:45.508 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:45.508 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:45.508 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7915716 kB' 'MemUsed: 4326256 kB' 'SwapCached: 0 kB' 'Active: 461740 kB' 'Inactive: 1474992 kB' 'Active(anon): 129756 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1474992 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'FilePages: 1817448 kB' 'Mapped: 48628 kB' 'AnonPages: 121152 kB' 'Shmem: 10472 kB' 'KernelStack: 6288 kB' 'PageTables: 4060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62848 kB' 'Slab: 135532 kB' 'SReclaimable: 62848 kB' 'SUnreclaim: 72684 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0'
[... identical per-key scan of the node0 counters elided: MemTotal through SecPageTables all fail the \H\u\g\e\P\a\g\e\s\_\S\u\r\p match and hit continue ...]
00:05:45.509 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.509 11:30:44
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.509 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.509 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.509 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.509 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.509 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.509 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.509 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.509 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.509 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.509 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.509 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.509 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.509 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.509 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.509 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.509 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.509 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.509 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.509 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.509 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.509 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.509 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.509 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.509 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.509 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.509 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.509 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.509 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.509 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.509 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.509 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.509 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.509 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.509 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.509 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:05:45.509 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.509 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.509 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.509 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.509 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.509 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.509 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.509 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.509 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.509 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.509 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.509 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.509 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.509 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.509 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.509 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.509 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.509 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.509 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.509 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.509 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.509 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.509 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.509 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.509 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:45.509 11:30:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:45.509 node0=1025 expecting 1025 00:05:45.509 ************************************ 00:05:45.509 END TEST odd_alloc 00:05:45.509 ************************************ 00:05:45.509 11:30:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:45.509 11:30:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:45.509 11:30:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:45.509 11:30:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:45.509 11:30:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:05:45.509 11:30:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:05:45.509 00:05:45.509 real 0m0.773s 00:05:45.509 user 0m0.349s 00:05:45.509 sys 0m0.434s 00:05:45.509 11:30:44 
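Note on the records above: the long runs of 'setup/common.sh@31/@32' lines are the xtrace of a single key lookup inside get_meminfo. A minimal sketch of that lookup, reconstructed from the line numbers and commands visible in the trace (names follow the trace; the upstream setup/common.sh may differ in detail):

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below

    # get_meminfo KEY [NODE] -- print one value from /proc/meminfo, or from the
    # per-node meminfo in sysfs when NODE is given. Reconstruction, not verbatim.
    get_meminfo() {
        local get=$1 node=$2
        local var val _ line
        local mem_f mem
        mem_f=/proc/meminfo
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # sysfs lines carry a "Node <N> " prefix; strip it from every element.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            # IFS=': ' splits "HugePages_Surp:    0" into var/val, as in the trace.
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the repeated '@32 continue' records
            echo "$val"                        # '@33 echo 1025' / '@33 echo 0'
            return 0
        done
        return 1
    }

With the node0 snapshot printed above, get_meminfo HugePages_Total 0 yields 1025 and get_meminfo HugePages_Surp 0 yields 0, which is how the trace arrives at the '(( 1025 == nr_hugepages + surp + resv ))' check.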
00:05:45.509 11:30:44 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:45.509 11:30:44 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:05:45.509 11:30:44 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:05:45.509 11:30:44 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:45.509 11:30:44 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:45.509 11:30:44 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:45.509 ************************************
00:05:45.509 START TEST custom_alloc
00:05:45.509 ************************************
00:05:45.509 11:30:44 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc
00:05:45.509 11:30:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:05:45.509 11:30:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:05:45.509 11:30:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:05:45.509 11:30:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:05:45.509 11:30:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:05:45.509 11:30:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:05:45.510 11:30:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:05:45.510 11:30:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:45.510 11:30:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:45.510 11:30:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:05:45.510 11:30:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:45.510 11:30:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:45.510 11:30:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:45.510 11:30:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:05:45.510 11:30:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:45.510 11:30:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:45.510 11:30:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:45.510 11:30:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:45.510 11:30:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:45.510 11:30:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:45.510 11:30:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:05:45.510 11:30:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
00:05:45.510 11:30:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0
00:05:45.510 11:30:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:45.510 11:30:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:05:45.510 11:30:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 ))
00:05:45.510 11:30:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:05:45.510 11:30:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:05:45.510 11:30:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:05:45.510 11:30:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:05:45.510 11:30:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:45.510 11:30:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:45.510 11:30:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:05:45.510 11:30:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:45.510 11:30:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:45.510 11:30:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:45.510 11:30:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:45.510 11:30:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:05:45.510 11:30:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:05:45.510 11:30:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:05:45.510 11:30:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:05:45.510 11:30:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512'
00:05:45.510 11:30:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:05:45.510 11:30:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:45.510 11:30:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:45.768 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:46.028 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:46.028 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:46.028 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:46.028 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:46.028 11:30:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512
00:05:46.028 11:30:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:05:46.028 11:30:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node
00:05:46.028 11:30:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:46.028 11:30:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:46.028 11:30:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:46.028 11:30:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:46.028 11:30:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:46.028 11:30:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:46.028 11:30:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:46.028 11:30:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:46.028 11:30:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:05:46.028 11:30:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:46.028 11:30:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:46.028 11:30:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:46.028 11:30:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:46.028 11:30:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:46.028 11:30:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:46.028 11:30:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:46.028 11:30:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:46.028 11:30:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:46.028 11:30:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8970828 kB' 'MemAvailable: 10572292 kB' 'Buffers: 2436 kB' 'Cached: 1815016 kB' 'SwapCached: 0 kB' 'Active: 459620 kB' 'Inactive: 1474996 kB' 'Active(anon): 127636 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1474996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 118716 kB' 'Mapped: 48080 kB' 'Shmem: 10472 kB' 'KReclaimable: 62844 kB' 'Slab: 135356 kB' 'SReclaimable: 62844 kB' 'SUnreclaim: 72512 kB' 'KernelStack: 6248 kB' 'PageTables: 3944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 336396 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB'
[... xtrace of the per-key scan elided: every key from MemTotal through HardwareCorrupted fails the AnonHugePages match at setup/common.sh@32 and hits 'continue' ...]
00:05:46.029 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:46.029 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:46.029 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:46.029 11:30:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
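At this point the trace has anon=0 and goes on to collect HugePages_Surp and HugePages_Rsvd for the same accounting check that odd_alloc ran at setup/hugepages.sh@110. A sketch of that verification pattern, reconstructed from the @89-@110 records and combining the two tests' traces (it reuses the get_meminfo sketch above; the THP condition mirrors the '@96' record, and the upstream function also does per-node bookkeeping not shown here):

    # Reconstruction for illustration; not the verbatim upstream source.
    verify_nr_hugepages() {
        local anon surp resv total
        # Only count AnonHugePages when THP is not pinned to 'never' (@96).
        if [[ $(< /sys/kernel/mm/transparent_hugepage/enabled) != *"[never]"* ]]; then
            anon=$(get_meminfo AnonHugePages)    # @97 -> anon=0 in this run
        else
            anon=0
        fi
        : "$anon"                                # feeds checks not shown here
        surp=$(get_meminfo HugePages_Surp)       # @99  -> surp=0
        resv=$(get_meminfo HugePages_Rsvd)       # @100 -> in progress below
        total=$(get_meminfo HugePages_Total)
        # Every page must be accounted for, as in odd_alloc's
        # '(( 1025 == nr_hugepages + surp + resv ))'; here 512 == 512 + 0 + 0.
        (( total == nr_hugepages + surp + resv ))
    }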
00:05:46.029 11:30:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:46.029 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:46.029 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:05:46.029 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:46.030 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:46.030 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:46.030 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:46.030 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:46.030 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:46.030 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:46.030 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:46.030 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8970972 kB' 'MemAvailable: 10572436 kB' 'Buffers: 2436 kB' 'Cached: 1815016 kB' 'SwapCached: 0 kB' 'Active: 459152 kB' 'Inactive: 1474996 kB' 'Active(anon): 127168 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1474996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118588 kB' 'Mapped: 47888 kB' 'Shmem: 10472 kB' 'KReclaimable: 62844 kB' 'Slab: 135324 kB' 'SReclaimable: 62844 kB' 'SUnreclaim: 72480 kB' 'KernelStack: 6240 kB' 'PageTables: 3792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 336396 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB'
00:05:46.030 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
[... xtrace of the per-key scan elided: every key from MemTotal through HugePages_Rsvd fails the HugePages_Surp match at setup/common.sh@32 and hits 'continue' ...]
00:05:46.032 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:46.032 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:46.032 11:30:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:46.032 11:30:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:46.032 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:46.032 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:46.032 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:46.032 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:46.032 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:46.032 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:46.032 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:46.032 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:46.032 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:46.032 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.032 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8970972 kB' 'MemAvailable: 10572436 kB' 'Buffers: 2436 kB' 'Cached: 1815016 kB' 'SwapCached: 0 kB' 'Active: 459196 kB' 'Inactive: 1474996 kB' 'Active(anon): 127212 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1474996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118584 kB' 'Mapped: 47888 kB' 'Shmem: 10472 kB' 'KReclaimable: 62844 kB' 'Slab: 135324 kB' 'SReclaimable: 62844 kB' 'SUnreclaim: 72480 kB' 'KernelStack: 6240 kB' 'PageTables: 3792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 336396 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:05:46.032 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.032 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.032 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.032 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.032 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.032 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.032 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.032 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.294 11:30:45 
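The trace above is setup/common.sh's get_meminfo helper at work: it snapshots a meminfo file into an array with mapfile, strips any "Node N " prefix with an extglob expansion, then reads "key: value" pairs with IFS=': ' until the requested key matches; the backslash-escaped \H\u\g\e\P\a\g\e\s\_\S\u\r\p in the [[ ]] tests is simply how `set -x` renders the quoted literal pattern. A minimal sketch of that pattern, reconstructed from the xtrace rather than copied from the SPDK repo (the function body here is an approximation; only the names and expansions visible in the trace are taken as given):

    #!/usr/bin/env bash
    # Sketch of the get_meminfo lookup pattern exercised in the trace above.
    shopt -s extglob

    get_meminfo() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo mem line
        # When a node is given and a per-node meminfo exists, read that instead
        # (the trace shows common.sh@23/@24 doing exactly this switch).
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node 0 "; strip it so the
        # keys line up with the /proc/meminfo format (extglob pattern).
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            # Split "HugePages_Surp:      0" into key and value.
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "${val:-0}"
            return 0
        done
        return 1
    }

    get_meminfo HugePages_Surp      # -> 0 on the box traced above
    get_meminfo HugePages_Surp 0    # same key, read from node0's meminfo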
[trace condensed: the identical key scan repeated for HugePages_Rsvd, with setup/common.sh@32 continuing past every other meminfo key (MemTotal, MemFree, MemAvailable, …, HugePages_Free) until the match]
00:05:46.295 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:46.295 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:46.295 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:46.295 11:30:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:46.295 11:30:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:05:46.295 nr_hugepages=512
00:05:46.295 11:30:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:46.295 resv_hugepages=0
00:05:46.295 11:30:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:46.295 surplus_hugepages=0
00:05:46.295 11:30:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:46.295 anon_hugepages=0
00:05:46.295 11:30:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:46.295 11:30:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:05:46.295 11:30:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:46.295 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:46.295 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:05:46.295 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:46.295 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:46.295 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:46.295 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:46.295 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:46.295 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:46.295 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:46.295 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:46.296 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8970972 kB' 'MemAvailable: 10572436 kB' 'Buffers: 2436 kB' 'Cached: 1815016 kB' 'SwapCached: 0 kB' 'Active: 459392 kB' 'Inactive: 1474996 kB' 'Active(anon): 127408 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1474996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118524 kB' 'Mapped: 47888 kB' 'Shmem: 10472 kB' 'KReclaimable: 62844 kB' 'Slab: 135324 kB' 'SReclaimable: 62844 kB' 'SUnreclaim: 72480 kB' 'KernelStack: 6224 kB' 'PageTables: 3744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 336396 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB'
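With surp=0 and resv=0 in hand, hugepages.sh can check the pool arithmetic: the snapshot reports Hugepagesize: 2048 kB and HugePages_Total: 512, which accounts exactly for the 1048576 kB shown as Hugetlb (512 x 2048 kB), and the assertions at @107/@109/@110 demand that the total equal the requested nr_hugepages plus surplus and reserved pages. A hedged sketch of the same bookkeeping, reusing the get_meminfo sketch above (the error messages are illustrative, not the script's own):

    # Consistency checks mirroring hugepages.sh@107-@110 with the values the
    # trace reports; get_meminfo is the helper sketched earlier.
    nr_hugepages=512
    surp=$(get_meminfo HugePages_Surp)    # -> 0 in the trace
    resv=$(get_meminfo HugePages_Rsvd)    # -> 0 in the trace
    total=$(get_meminfo HugePages_Total)  # -> 512 in the trace
    # The pool passes only if every allocated page is accounted for:
    # 512 == 512 (requested) + 0 (surplus) + 0 (reserved).
    (( total == nr_hugepages + surp + resv )) || echo "pool mismatch" >&2
    (( total == nr_hugepages )) || echo "unexpected surplus/reserved pages" >&2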
[trace condensed: the scan repeated once more for HugePages_Total, with setup/common.sh@32 continuing past every non-matching key until the target]
00:05:46.297 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:46.297 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512
00:05:46.297 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:46.297 11:30:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:46.297 11:30:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:46.297 11:30:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:05:46.297 11:30:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:46.297 11:30:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:46.297 11:30:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:46.297 11:30:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:46.297 11:30:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:46.297 11:30:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:46.297 11:30:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:46.297 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:46.297 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:05:46.297 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:46.297 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:46.297 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:46.297 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:46.297 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:46.297 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:46.297 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:46.297 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:46.298 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8970720 kB' 'MemUsed: 3271252 kB' 'SwapCached: 0 kB' 'Active: 459240 kB' 'Inactive: 1474996 kB' 'Active(anon): 127256 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1474996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 1817452 kB' 'Mapped: 47888 kB' 'AnonPages: 118580 kB' 'Shmem: 10472 kB' 'KernelStack: 6240 kB' 'PageTables: 3792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62844 kB' 'Slab: 135324 kB' 'SReclaimable: 62844 kB' 'SUnreclaim: 72480 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
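The node walk above (get_nodes at hugepages.sh@27-@33, then the @115-@117 loop) enumerates /sys/devices/system/node/node*, records the 512 pages expected on the single node, and re-runs get_meminfo against node0's own meminfo file, which is why common.sh@24 switches mem_f to the /sys path. A sketch of that walk, again reconstructed from the trace (nodes_test, the array the @115 loop indexes, is populated earlier in the log; nodes_sys here mirrors the @30 assignment):

    # Per-node walk mirroring hugepages.sh get_nodes and the @115-@117 loop.
    shopt -s extglob nullglob
    declare -A nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=512    # pages expected per node in this test
    done
    no_nodes=${#nodes_sys[@]}            # 1 on the single-node VM traced here
    (( no_nodes > 0 )) || exit 1
    for node in "${!nodes_sys[@]}"; do
        # Surplus for this node comes from /sys/devices/system/node/node$node/meminfo.
        echo "node$node HugePages_Surp=$(get_meminfo HugePages_Surp "$node")"
    done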
[trace condensed: the node0 lookup then scanned the per-node keys (MemTotal, MemFree, MemUsed, SwapCached, Active, Inactive, …, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted) with setup/common.sh@32 continuing past each one; the excerpt ends while this HugePages_Surp scan is still in progress]
setup/common.sh@32 -- # continue 00:05:46.299 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.299 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.299 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.299 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.299 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.299 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.299 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.299 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.299 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.299 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.299 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.299 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:46.299 11:30:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:46.299 11:30:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:46.299 11:30:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:46.299 11:30:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:46.299 11:30:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:46.299 11:30:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:46.299 node0=512 expecting 512 00:05:46.299 11:30:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:46.299 00:05:46.299 real 0m0.764s 00:05:46.299 user 0m0.352s 00:05:46.299 sys 0m0.424s 00:05:46.299 11:30:45 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:46.299 11:30:45 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:46.299 ************************************ 00:05:46.299 END TEST custom_alloc 00:05:46.299 ************************************ 00:05:46.299 11:30:45 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:46.299 11:30:45 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:46.299 11:30:45 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:46.299 11:30:45 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:46.299 ************************************ 00:05:46.299 START TEST no_shrink_alloc 00:05:46.299 ************************************ 00:05:46.299 11:30:45 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc 00:05:46.299 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:46.299 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:46.299 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:46.299 11:30:45 setup.sh.hugepages.no_shrink_alloc -- 
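The @49-@73 xtrace lines that follow are the whole sizing pass. As a reading aid, here is a minimal sketch of that logic in source form, reconstructed from the trace rather than the verbatim setup/hugepages.sh body; the default_hugepages value is an assumption taken from the 'Hugepagesize: 2048 kB' field in the meminfo snapshots:

  # sketch: per-node hugepage sizing as exercised by "get_test_nr_hugepages 2097152 0"
  get_test_nr_hugepages() {
    local size=$1 && shift                  # remaining args are node ids, here ('0')
    local node_ids=("$@")
    local default_hugepages=2048            # kB; assumed from Hugepagesize in the snapshots
    (( size >= default_hugepages )) || return 1
    nr_hugepages=$(( size / default_hugepages ))   # 2097152 / 2048 = 1024, as @57 shows
    local -g nodes_test=()
    local node
    for node in "${node_ids[@]}"; do
      nodes_test[node]=$nr_hugepages        # @71: node 0 is assigned all 1024 pages
    done
  }

With a single node id the whole allocation lands on node 0, which is why the trace below records nodes_test[_no_nodes]=1024.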
00:05:46.299 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:05:46.299 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:46.299 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:05:46.299 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:46.299 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:05:46.299 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:46.299 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:46.299 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:46.299 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:46.299 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:46.299 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:46.299 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:46.299 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:46.299 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:46.299 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:46.299 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:46.299 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:05:46.299 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:05:46.299 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:05:46.299 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:46.299 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:46.557 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:46.818 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:46.818 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:46.818 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:46.818 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:46.818 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:05:46.818 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:05:46.818 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:46.818 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:46.818 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:46.818 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:46.818 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:46.818 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:46.818 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
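verify_nr_hugepages reads each counter through get_meminfo, and the trace below steps through that function once per counter. A sketch of the scan in source form, reconstructed from the setup/common.sh@17-@33 entries (argument handling and line numbers are approximations, not the verbatim common.sh body):

  shopt -s extglob                         # for the +([0-9]) pattern below

  # sketch: the /proc/meminfo field scan traced at setup/common.sh@17-@33
  get_meminfo() {
    local get=$1 node=${2:-}
    local var val
    local mem_f=/proc/meminfo mem
    # with a node id, prefer the per-node sysfs copy (the @23/@25 tests)
    if [[ -e /sys/devices/system/node/node$node/meminfo ]] && [[ -n $node ]]; then
      mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")       # strip the "Node N " prefix of per-node files
    local IFS=': '
    while read -r var val _; do
      [[ $var == "$get" ]] || continue     # the long runs of @32 "continue" below
      echo "$val"                          # kB for sizes, a bare count for HugePages_*
      return 0
    done < <(printf '%s\n' "${mem[@]}")
  }

Here node is empty, so every lookup rescans the system-wide /proc/meminfo from the top.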
00:05:46.818 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:46.818 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:46.818 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:46.818 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:46.818 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:46.818 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:46.818 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:46.818 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:46.818 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:46.818 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:46.818 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:46.818 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7925036 kB' 'MemAvailable: 9526496 kB' 'Buffers: 2436 kB' 'Cached: 1815012 kB' 'SwapCached: 0 kB' 'Active: 459820 kB' 'Inactive: 1474992 kB' 'Active(anon): 127836 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1474992 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 118980 kB' 'Mapped: 48056 kB' 'Shmem: 10472 kB' 'KReclaimable: 62844 kB' 'Slab: 135308 kB' 'SReclaimable: 62844 kB' 'SUnreclaim: 72464 kB' 'KernelStack: 6280 kB' 'PageTables: 3644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336396 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB'
[xtrace condensed: setup/common.sh@31-@32 repeat the IFS=': ' / read -r var val _ / continue cycle for each field from MemTotal through HardwareCorrupted; none matches AnonHugePages]
00:05:46.820 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:46.820 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:46.820 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:46.820 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
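AnonHugePages resolved to 0, and the same scan now repeats for the surplus and reserved counters. In source form, the three assignments behind the @97/@99/@100 entries are simply the following sketch (the resv assignment is inferred from the locals declared at @92-@94 and the @100 call, since its trace continues past this excerpt):

  anon=$(get_meminfo AnonHugePages)     # 0 kB in the snapshots here
  surp=$(get_meminfo HugePages_Surp)    # surplus pages beyond nr_hugepages; 0 here
  resv=$(get_meminfo HugePages_Rsvd)    # reserved but not yet faulted-in pages; 0 in the snapshot

Each command substitution re-runs the full meminfo scan, which is why the read/continue cycle appears three times in this stretch of the log.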
00:05:46.820 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:46.820 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:46.820 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:46.820 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:46.820 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:46.820 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:46.820 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:46.820 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:46.820 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:46.820 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:46.820 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:46.820 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:46.820 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7925036 kB' 'MemAvailable: 9526496 kB' 'Buffers: 2436 kB' 'Cached: 1815012 kB' 'SwapCached: 0 kB' 'Active: 459884 kB' 'Inactive: 1474992 kB' 'Active(anon): 127900 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1474992 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 119024 kB' 'Mapped: 48064 kB' 'Shmem: 10472 kB' 'KReclaimable: 62844 kB' 'Slab: 135308 kB' 'SReclaimable: 62844 kB' 'SUnreclaim: 72464 kB' 'KernelStack: 6248 kB' 'PageTables: 3708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336396 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB'
[xtrace condensed: setup/common.sh@31-@32 repeat the IFS=': ' / read -r var val _ / continue cycle for each field from MemTotal through HugePages_Rsvd; none matches HugePages_Surp]
00:05:46.822 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:46.822 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:46.822 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:46.822 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:05:46.822 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:46.822 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:46.822 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:46.822 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:46.822 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:46.822 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:46.822 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:46.822 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:46.822 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:46.822 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:46.822 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:46.822 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:05:46.822 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.822 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.822 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.822 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.822 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.822 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.822 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.822 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.822 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.822 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.822 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.822 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.822 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.822 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.822 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.822 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.822 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.823 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.824 11:30:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.824 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.824 11:30:45 
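Every get_meminfo call in this trace is the same mechanism: load the meminfo file into an array, strip any "Node N " prefix, then walk the keys until the requested one matches and echo its value. Below is a minimal standalone sketch of that logic, assuming bash 4+ (for mapfile) with extglob enabled; get_meminfo_sketch is a hypothetical name standing in for the real get_meminfo in setup/common.sh, which additionally accepts an optional node argument.

#!/usr/bin/env bash
shopt -s extglob
# Sketch of the scan visible at setup/common.sh@28-@33: read meminfo into an
# array, strip any "Node N " prefix, then walk the keys. Every non-matching
# key is one of the 'continue' entries repeated throughout the xtrace above.
get_meminfo_sketch() {
    local get=$1 line var val _ mem
    mapfile -t mem </proc/meminfo
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue    # skip keys that don't match
        echo "$val"                         # common.sh@33: echo value, return 0
        return 0
    done
    return 1
}
get_meminfo_sketch HugePages_Rsvd

Run against the dump above, this would print 0 for HugePages_Rsvd, matching the 'echo 0' the trace just returned.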
00:05:46.824 nr_hugepages=1024 resv_hugepages=0 surplus_hugepages=0 anon_hugepages=0
11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:46.825 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:46.825 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:46.825 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:05:46.825 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:46.825 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:46.825 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:46.825 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:46.825 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:46.825 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:46.825 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:46.825 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:47.084 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:47.085 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:47.085 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:47.085 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:47.085 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7924784 kB' 'MemAvailable: 9526244 kB' 'Buffers: 2436 kB' 'Cached: 1815012 kB' 'SwapCached: 0 kB' 'Active: 459392 kB' 'Inactive: 1474992 kB' 'Active(anon): 127408 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1474992 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 118508 kB' 'Mapped: 47888 kB' 'Shmem: 10472 kB' 'KReclaimable: 62844 kB' 'Slab: 135340 kB' 'SReclaimable: 62844 kB' 'SUnreclaim: 72496 kB' 'KernelStack: 6224 kB' 'PageTables: 3736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336396 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB'
[... xtrace elided: the HugePages_Total scan again walks MemTotal through Unaccepted; every non-matching key hits 'continue' at setup/common.sh@32 ...]
00:05:47.087 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:47.087 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:05:47.087 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:47.087 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
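The arithmetic guards at setup/hugepages.sh@107 and @110 encode the invariant this test is checking: the kernel's HugePages_Total must equal the requested pool size plus surplus and reserved pages, and here 1024 == 1024 + 0 + 0 holds. The same check as a small standalone snippet (variable names follow the trace; awk stands in for the get_meminfo helper, so this is a sketch rather than the test's own code):

#!/usr/bin/env bash
# Hugepage accounting check mirroring setup/hugepages.sh@107-@110.
nr_hugepages=1024 surp=0 resv=0    # values the trace derived above
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
if (( total == nr_hugepages + surp + resv )); then
    echo "consistent: HugePages_Total=$total"
else
    echo "mismatch: HugePages_Total=$total != $((nr_hugepages + surp + resv))" >&2
    exit 1
fi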
11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:47.087 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:05:47.087 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:47.087 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:47.087 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:47.087 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:47.087 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:47.087 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:47.087 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:47.087 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:47.087 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:05:47.087 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:47.087 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:47.087 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:47.087 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:47.087 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:47.087 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:47.087 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:47.087 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:47.087 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7924784 kB' 'MemUsed: 4317188 kB' 'SwapCached: 0 kB' 'Active: 459392 kB' 'Inactive: 1474992 kB' 'Active(anon): 127408 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1474992 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 1817448 kB' 'Mapped: 47888 kB' 'AnonPages: 118500 kB' 'Shmem: 10472 kB' 'KernelStack: 6224 kB' 'PageTables: 3736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62844 kB' 'Slab: 135340 kB' 'SReclaimable: 62844 kB' 'SUnreclaim: 72496 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:05:47.087 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[... xtrace elided: the node0 HugePages_Surp scan steps through MemTotal, MemFree, MemUsed and the remaining node0 keys; each non-matching key hits 'continue' at setup/common.sh@32 ...]
00:05:47.088 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.088 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.088 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.088 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.088 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.088 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.088 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:47.088 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:47.088 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:47.088 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:47.088 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:47.088 node0=1024 expecting 1024 00:05:47.088 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:47.088 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:47.088 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:47.088 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:05:47.088 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:05:47.088 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:05:47.088 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:47.088 11:30:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:47.346 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:47.609 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:47.609 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:47.609 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:47.609 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:47.609 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:47.609 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:05:47.609 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:47.609 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:47.609 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:47.609 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:47.609 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:47.609 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:47.609 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:47.609 11:30:46 setup.sh.hugepages.no_shrink_alloc -- 
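The wall of xtrace above is common.sh's get_meminfo helper at work: it walks a meminfo listing with IFS=': ' read and echoes the value of the single key it was asked for, which the caller captures through command substitution. A minimal sketch of that pattern under assumed names (get_meminfo_sketch is an illustration, not the SPDK source, which reads from a pre-loaded array rather than the file directly):

# Why the trace shows \H\u\g\e\P\a\g\e\s\_\S\u\r\p: xtrace re-quotes the
# right-hand side of [[ == ]] character by character to signal that the
# quoted operand is matched literally, not as a glob pattern.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do   # _ swallows the trailing "kB"
        [[ $var == "$get" ]] || continue   # the "continue" lines in the trace
        echo "$val"
        return 0                           # the "echo 0 / return 0" pair above
    done < /proc/meminfo
    echo 0                                 # key absent: report 0
}

surp=$(get_meminfo_sketch HugePages_Surp)  # 0 on this host, per the log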
00:05:47.609 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:47.609 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:47.609 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:47.609 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:47.609 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:47.609 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:47.609 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:47.609 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:47.609 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:47.609 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:47.609 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:47.609 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:47.609 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7926024 kB' 'MemAvailable: 9527484 kB' 'Buffers: 2436 kB' 'Cached: 1815012 kB' 'SwapCached: 0 kB' 'Active: 459716 kB' 'Inactive: 1474992 kB' 'Active(anon): 127732 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1474992 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 118880 kB' 'Mapped: 48112 kB' 'Shmem: 10472 kB' 'KReclaimable: 62844 kB' 'Slab: 135324 kB' 'SReclaimable: 62844 kB' 'SUnreclaim: 72480 kB' 'KernelStack: 6248 kB' 'PageTables: 3628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336396 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB'
[trace condensed: setup/common.sh@31-32 compares every snapshot key from MemTotal through HardwareCorrupted to \A\n\o\n\H\u\g\e\P\a\g\e\s and continues past each one]
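The common.sh@22-@29 entries above choose the meminfo source and normalize it. Per-NUMA-node files (/sys/devices/system/node/nodeN/meminfo) prefix every line with "Node N ", and the extglob strip at @29 removes that prefix so one key/value parser serves both layouts; in this run node is empty, so the @23 test probed the non-existent node/meminfo path and the helper fell back to /proc/meminfo. A standalone sketch of that step (hypothetical framing, same expansions as the trace):

# extglob must be on for +([0-9]) to act as a pattern inside the expansion.
shopt -s extglob
node=0
mem_f=/proc/meminfo
if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
fi
mapfile -t mem < "$mem_f"          # one array element per meminfo line
mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node 0 " prefix, if present
printf '%s\n' "${mem[@]:0:3}"      # e.g. MemTotal / MemFree / MemAvailable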
00:05:47.610 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:47.610 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:47.610 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:47.611 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:05:47.611 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:47.611 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:47.611 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:47.611 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:47.611 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:47.611 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:47.611 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:47.611 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:47.611 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:47.611 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:47.611 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:47.611 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7925520 kB' 'MemAvailable: 9526980 kB' 'Buffers: 2436 kB' 'Cached: 1815012 kB' 'SwapCached: 0 kB' 'Active: 459308 kB' 'Inactive: 1474992 kB' 'Active(anon): 127324 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1474992 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 118468 kB' 'Mapped: 47888 kB' 'Shmem: 10472 kB' 'KReclaimable: 62844 kB' 'Slab: 135328 kB' 'SReclaimable: 62844 kB' 'SUnreclaim: 72484 kB' 'KernelStack: 6240 kB' 'PageTables: 3784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336396 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB'
00:05:47.611 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[trace condensed: setup/common.sh@31-32 repeats IFS=': ' / read -r var val _ for every snapshot key from MemTotal through Unaccepted, comparing each to \H\u\g\e\P\a\g\e\s\_\S\u\r\p and continuing]
00:05:47.612 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:47.612 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:05:47.612 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:47.613 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:47.613 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:47.613 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:05:47.613 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:47.613 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:47.613 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:47.613 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:05:47.613 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:47.613 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:47.613 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:47.613 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:47.613 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:47.613 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:05:47.613 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:47.613 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:47.613 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:47.613 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:47.613 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:47.613 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:47.613 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:47.613 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:47.613 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:47.613 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:47.613 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:47.613 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:47.613 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7925520 kB' 'MemAvailable: 9526980 kB' 'Buffers: 2436 kB' 'Cached: 1815012 kB' 'SwapCached: 0 kB' 'Active: 459240 kB' 'Inactive: 1474992 kB' 'Active(anon): 127256 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1474992 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 118416 kB' 'Mapped: 47988 kB' 'Shmem: 10472 kB' 'KReclaimable: 62844 kB' 'Slab: 135328 kB' 'SReclaimable: 62844 kB' 'SUnreclaim: 72484 kB' 'KernelStack: 6240 kB' 'PageTables: 3784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336396 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB'
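The @97, @99 and @100 captures collect AnonHugePages, HugePages_Surp and HugePages_Rsvd for verify_nr_hugepages, the routine that printed node0=1024 expecting 1024 earlier. A hedged reconstruction of that bookkeeping, reusing get_meminfo_sketch from the earlier sketch (the real arithmetic lives in setup/hugepages.sh and may differ in detail):

# Values in comments are the ones visible in this log.
anon=$(get_meminfo_sketch AnonHugePages)     # 0 kB of transparent hugepages
surp=$(get_meminfo_sketch HugePages_Surp)    # 0 surplus pages
resv=$(get_meminfo_sketch HugePages_Rsvd)    # 0 reserved pages
total=$(get_meminfo_sketch HugePages_Total)  # 1024 pages of 2048 kB each
expected=1024  # NRHUGE=512 was requested, but 1024 were already allocated
if (( total - surp - resv == expected )); then
    echo "node0=$total expecting $expected"
fi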
[trace condensed: setup/common.sh@31-32 scans this snapshot key by key (MemTotal through Bounce), comparing each to \H\u\g\e\P\a\g\e\s\_\R\s\v\d and continuing; the capture breaks off mid-scan]
00:05:47.614 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var
val _ 00:05:47.614 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.614 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.614 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.614 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.614 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.614 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.614 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.614 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.614 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.614 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.614 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.614 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.614 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.614 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.614 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.614 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.614 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.614 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.614 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.614 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.614 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.614 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.614 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.614 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.614 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.614 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.614 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.614 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.614 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.614 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.614 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.614 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.614 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.614 11:30:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.614 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.614 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.614 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.614 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.614 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.614 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.614 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.614 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.614 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.614 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.615 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.615 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.615 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.615 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.615 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.615 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.615 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.615 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.615 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.615 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.615 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.615 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.615 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.615 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.615 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.615 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.615 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.615 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.615 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.615 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.615 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.615 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.615 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:47.615 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.615 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.615 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.615 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.615 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.615 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.615 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:47.615 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:47.615 nr_hugepages=1024 00:05:47.615 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:47.615 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:47.615 resv_hugepages=0 00:05:47.615 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:47.615 surplus_hugepages=0 00:05:47.615 anon_hugepages=0 00:05:47.615 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:47.615 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:47.615 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:47.615 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:47.615 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:47.615 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:47.615 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:47.615 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:47.615 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:47.615 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:47.615 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:47.615 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:47.615 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:47.615 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:47.615 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.615 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.615 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7925520 kB' 'MemAvailable: 9526980 kB' 'Buffers: 2436 kB' 'Cached: 1815012 kB' 'SwapCached: 0 kB' 'Active: 459476 kB' 'Inactive: 1474992 kB' 'Active(anon): 127492 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1474992 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 
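The xtrace above is hugepages.sh verifying the pool it configured: HugePages_Rsvd comes back 0, and the test then asserts that the 1024 pages it asked for are all present with nothing reserved or surplus. A minimal bash sketch of that check, reconstructed from the trace (not the verbatim SPDK hugepages.sh):

    # Stand-in for the common.sh helper (a fuller sketch follows the next
    # trace block); prints the value of one /proc/meminfo field.
    get_meminfo() { awk -v key="$1:" '$1 == key { print $2; exit }' /proc/meminfo; }

    nr_hugepages=$(get_meminfo HugePages_Total)   # 1024 on this run
    resv=$(get_meminfo HugePages_Rsvd)            # 0
    surp=$(get_meminfo HugePages_Surp)            # 0
    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    # no_shrink_alloc passes only if every configured page is still present
    # and none of them is reserved or surplus:
    (( 1024 == nr_hugepages + surp + resv )) && (( 1024 == nr_hugepages ))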
00:05:47.615 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:47.615 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:47.615 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:47.615 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:47.615 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:47.615 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:47.615 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:47.615 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:47.615 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:47.615 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:47.615 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:47.615 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:47.615 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7925520 kB' 'MemAvailable: 9526980 kB' 'Buffers: 2436 kB' 'Cached: 1815012 kB' 'SwapCached: 0 kB' 'Active: 459476 kB' 'Inactive: 1474992 kB' 'Active(anon): 127492 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1474992 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 118668 kB' 'Mapped: 47888 kB' 'Shmem: 10472 kB' 'KReclaimable: 62844 kB' 'Slab: 135324 kB' 'SReclaimable: 62844 kB' 'SUnreclaim: 72480 kB' 'KernelStack: 6208 kB' 'PageTables: 3696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336396 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB'
00:05:47.615 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:47.615 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:05:47.616 [... the same compare/continue/IFS=': '/read cycle repeats for every field, MemFree through Unaccepted, none of which matches HugePages_Total ...]
00:05:47.617 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
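Both lookups go through the same common.sh helper being traced here: it snapshots the relevant meminfo file into an array and scans it field by field, which is what produces the long compare/continue runs in this log. A simplified re-creation, reconstructed from the xtrace output (not the verbatim SPDK common.sh):

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below

    # get_meminfo FIELD [NODE]: print the value of one meminfo field,
    # from /proc/meminfo or, when a node is given, from that node's file.
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _
        local mem_f=/proc/meminfo mem
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node N "; strip it.
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip every other field
            echo "$val" && return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Total     # -> 1024 on this box
    get_meminfo HugePages_Surp 0    # -> 0, read from node0's meminfo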
00:05:47.617 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:05:47.617 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:47.617 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:47.617 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:47.617 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:05:47.617 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:47.617 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:47.617 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:47.617 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:47.617 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:47.617 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:47.617 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:47.617 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:47.617 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:05:47.617 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:47.617 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:47.617 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:47.617 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:47.617 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:47.617 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:47.617 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:47.617 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7925268 kB' 'MemUsed: 4316704 kB' 'SwapCached: 0 kB' 'Active: 459496 kB' 'Inactive: 1474992 kB' 'Active(anon): 127512 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1474992 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'FilePages: 1817448 kB' 'Mapped: 47888 kB' 'AnonPages: 118636 kB' 'Shmem: 10472 kB' 'KernelStack: 6240 kB' 'PageTables: 3792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62844 kB' 'Slab: 135324 kB' 'SReclaimable: 62844 kB' 'SUnreclaim: 72480 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:05:47.617 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:47.617 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:47.617 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:47.617 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:05:47.876 [... the same compare/continue/IFS=': '/read cycle repeats for every node0 field, MemFree through HugePages_Free, none of which matches HugePages_Surp ...]
00:05:47.877 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:47.877 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:47.877 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
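get_nodes, traced a few lines up, builds the nodes_sys map by walking the NUMA-node sysfs tree; on this single-node VM it finds node0 only, so no_nodes ends up as 1. A sketch under those assumptions (the nr_hugepages file used to fill the map is a guess; the trace only shows the resulting value, 1024):

    shopt -s extglob nullglob
    declare -A nodes_sys

    # Record each node's current 2 MiB hugepage count, keyed by node number.
    get_nodes() {
        local node
        for node in /sys/devices/system/node/node+([0-9]); do
            nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
        done
        no_nodes=${#nodes_sys[@]}
        (( no_nodes > 0 ))   # the test expects at least one NUMA node
    }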
00:05:47.877 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:47.877 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:47.877 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:47.877 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:47.877 node0=1024 expecting 1024
00:05:47.877 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:47.877 11:30:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:47.877
00:05:47.877 real	0m1.443s
00:05:47.877 user	0m0.675s
00:05:47.877 sys	0m0.809s
00:05:47.877 11:30:46 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:47.877 11:30:46 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x
00:05:47.877 ************************************
00:05:47.877 END TEST no_shrink_alloc
00:05:47.877 ************************************
00:05:47.877 11:30:46 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp
00:05:47.877 11:30:46 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:05:47.877 11:30:46 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:05:47.877 11:30:46 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:47.877 11:30:46 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:05:47.877 11:30:46 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:47.877 11:30:46 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:05:47.877 11:30:46 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:05:47.877 11:30:46 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:05:47.877
00:05:47.877 real	0m6.377s
00:05:47.877 user	0m2.799s
00:05:47.877 sys	0m3.621s
00:05:47.877 ************************************
00:05:47.877 END TEST hugepages
00:05:47.877 ************************************
00:05:47.877 11:30:46 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable
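clear_hp, traced above, is the teardown: it walks every hugepage pool on every node and resets it so later tests start from a clean slate. A sketch reconstructed from the trace (writing each pool's nr_hugepages file is an assumption; the trace only shows the directory glob and the echo 0):

    # Zero out every hugepage pool of every known node (needs root).
    clear_hp() {
        local node hp
        for node in "${!nodes_sys[@]}"; do
            for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*; do
                echo 0 > "$hp/nr_hugepages"
            done
        done
        export CLEAR_HUGE=yes   # tells later setup.sh runs to re-allocate
    }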
11:30:46 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:47.877 11:30:46 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:47.877 11:30:46 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:47.877 11:30:46 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:47.877 11:30:46 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:47.877 ************************************ 00:05:47.877 START TEST driver 00:05:47.877 ************************************ 00:05:47.877 11:30:46 setup.sh.driver -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:47.877 * Looking for test storage... 00:05:47.877 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:47.877 11:30:46 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:05:47.877 11:30:46 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:47.877 11:30:46 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:54.437 11:30:52 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:54.438 11:30:52 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:54.438 11:30:52 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:54.438 11:30:52 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:54.438 ************************************ 00:05:54.438 START TEST guess_driver 00:05:54.438 ************************************ 00:05:54.438 11:30:52 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:05:54.438 11:30:52 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:54.438 11:30:52 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:54.438 11:30:52 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:54.438 11:30:52 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:54.438 11:30:52 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_groups 00:05:54.438 11:30:52 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:54.438 11:30:52 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:54.438 11:30:52 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:54.438 11:30:52 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:54.438 11:30:52 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:05:54.438 11:30:52 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:05:54.438 11:30:52 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:05:54.438 11:30:52 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:54.438 11:30:52 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:54.438 11:30:52 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:54.438 11:30:52 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:54.438 11:30:52 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 
insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz
== *\.\k\o* ]] 00:05:54.438 11:30:52 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:05:54.438 11:30:52 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:54.438 Looking for driver=uio_pci_generic 00:05:54.438 11:30:52 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:54.438 11:30:52 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:54.438 11:30:52 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:54.438 11:30:52 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:05:54.438 11:30:52 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:54.438 11:30:52 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:54.438 11:30:53 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:54.438 11:30:53 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:05:54.438 11:30:53 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:55.006 11:30:53 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:55.006 11:30:53 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:55.006 11:30:53 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:55.006 11:30:53 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:55.006 11:30:53 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:55.006 11:30:53 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:55.006 11:30:53 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:55.006 11:30:53 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:55.006 11:30:53 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:55.264 11:30:54 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:55.264 11:30:54 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:55.264 11:30:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:55.264 11:30:54 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:55.264 11:30:54 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:55.264 11:30:54 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:55.264 11:30:54 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:01.822 00:06:01.822 real 0m7.236s 00:06:01.822 user 0m0.817s 00:06:01.822 sys 0m1.514s 00:06:01.822 11:31:00 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:01.822 ************************************ 00:06:01.822 END TEST guess_driver 00:06:01.822 ************************************ 00:06:01.822 11:31:00 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:06:01.822 00:06:01.822 real 0m13.309s 00:06:01.822 user 0m1.153s 00:06:01.822 sys 0m2.371s 00:06:01.822 11:31:00 
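The guess_driver run above ends the way the trace suggests: no IOMMU groups were found ((( 0 > 0 )) fails), unsafe no-IOMMU mode is not enabled, so vfio is rejected, and the test falls back to uio_pci_generic, which modprobe can resolve on this kernel. A minimal bash sketch of that pick, reconstructed from the trace rather than copied from test/setup/driver.sh:

```bash
#!/usr/bin/env bash
# Sketch of the driver pick traced above (a reconstruction, not the verbatim
# script): prefer vfio-pci when IOMMU groups exist or no-IOMMU mode is on,
# otherwise fall back to uio_pci_generic if the module resolves.
shopt -s nullglob

is_driver() {
    # `modprobe --show-depends` prints the insmod commands that loading the
    # module would run; a listed .ko/.ko.xz path means it is available.
    [[ $(modprobe --show-depends "$1" 2>/dev/null) == *.ko* ]]
}

pick_driver() {
    local iommu_groups=(/sys/kernel/iommu_groups/*)
    local unsafe=/sys/module/vfio/parameters/enable_unsafe_noiommu_mode
    if ((${#iommu_groups[@]} > 0)) || [[ -e $unsafe && $(<"$unsafe") == Y ]]; then
        echo vfio-pci
    elif is_driver uio_pci_generic; then
        echo uio_pci_generic
    else
        echo 'No valid driver found'
    fi
}

echo "Looking for driver=$(pick_driver)"
```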
setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:01.822 11:31:00 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:06:01.822 ************************************ 00:06:01.822 END TEST driver 00:06:01.822 ************************************ 00:06:01.822 11:31:00 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:06:01.822 11:31:00 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:01.822 11:31:00 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:01.822 11:31:00 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:01.822 ************************************ 00:06:01.822 START TEST devices 00:06:01.822 ************************************ 00:06:01.822 11:31:00 setup.sh.devices -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:06:01.822 * Looking for test storage... 00:06:01.822 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:06:01.822 11:31:00 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:06:01.822 11:31:00 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:06:01.822 11:31:00 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:01.822 11:31:00 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:02.388 11:31:01 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:06:02.388 11:31:01 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:06:02.388 11:31:01 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:06:02.388 11:31:01 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:06:02.388 11:31:01 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:02.388 11:31:01 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:06:02.388 11:31:01 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:06:02.388 11:31:01 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:02.388 11:31:01 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:02.388 11:31:01 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:02.388 11:31:01 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:06:02.388 11:31:01 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:06:02.388 11:31:01 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:02.388 11:31:01 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:02.388 11:31:01 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:02.388 11:31:01 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:06:02.388 11:31:01 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:06:02.388 11:31:01 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:06:02.388 11:31:01 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:02.388 11:31:01 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:02.388 11:31:01 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 
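START TEST devices above opens with the zoned-namespace scan (is_block_zoned reading each /sys/block/nvme*/queue/zoned); the scan resumes just below with nvme2n2 and the remaining namespaces. A minimal sketch of the idiom, where treating a missing attribute as non-zoned is an assumption, not something the log shows:

```bash
#!/usr/bin/env bash
# Sketch of the zoned-device scan running in the trace (reconstructed):
# a namespace whose queue/zoned attribute reads anything other than "none"
# is recorded so the device tests skip it.
shopt -s nullglob

declare -A zoned_devs
for nvme in /sys/block/nvme*; do
    dev=${nvme##*/}
    if [[ -e $nvme/queue/zoned && $(<"$nvme/queue/zoned") != none ]]; then
        zoned_devs[$dev]=1
    fi
done

echo "zoned devices: ${!zoned_devs[*]}"
```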
00:06:02.388 11:31:01 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:06:02.388 11:31:01 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:06:02.388 11:31:01 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:02.388 11:31:01 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:02.388 11:31:01 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:06:02.388 11:31:01 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:06:02.388 11:31:01 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:06:02.388 11:31:01 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:02.388 11:31:01 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:02.388 11:31:01 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:06:02.388 11:31:01 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:06:02.388 11:31:01 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:06:02.388 11:31:01 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:02.388 11:31:01 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:02.388 11:31:01 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:06:02.388 11:31:01 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:06:02.388 11:31:01 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:06:02.388 11:31:01 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:02.388 11:31:01 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:06:02.388 11:31:01 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:06:02.388 11:31:01 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:06:02.388 11:31:01 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:06:02.388 11:31:01 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:06:02.388 11:31:01 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:02.388 11:31:01 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:06:02.388 11:31:01 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:06:02.388 11:31:01 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:06:02.388 11:31:01 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:06:02.388 11:31:01 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:06:02.388 11:31:01 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:06:02.388 11:31:01 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:06:02.388 No valid GPT data, bailing 00:06:02.388 11:31:01 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:02.388 11:31:01 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:06:02.388 11:31:01 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:06:02.388 11:31:01 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:06:02.388 11:31:01 setup.sh.devices -- setup/common.sh@76 -- # local 
dev=nvme0n1 00:06:02.388 11:31:01 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:02.388 11:31:01 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:06:02.388 11:31:01 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:06:02.388 11:31:01 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:02.388 11:31:01 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:06:02.388 11:31:01 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:02.388 11:31:01 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:06:02.388 11:31:01 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:06:02.388 11:31:01 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:06:02.388 11:31:01 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:06:02.388 11:31:01 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:06:02.388 11:31:01 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:06:02.388 11:31:01 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:06:02.647 No valid GPT data, bailing 00:06:02.647 11:31:01 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:06:02.647 11:31:01 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:06:02.647 11:31:01 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:06:02.647 11:31:01 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:06:02.647 11:31:01 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:06:02.647 11:31:01 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:06:02.647 11:31:01 setup.sh.devices -- setup/common.sh@80 -- # echo 6343335936 00:06:02.647 11:31:01 setup.sh.devices -- setup/devices.sh@204 -- # (( 6343335936 >= min_disk_size )) 00:06:02.647 11:31:01 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:02.647 11:31:01 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:06:02.647 11:31:01 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:02.647 11:31:01 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n1 00:06:02.647 11:31:01 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:06:02.647 11:31:01 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:06:02.647 11:31:01 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:06:02.647 11:31:01 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n1 00:06:02.647 11:31:01 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n1 pt 00:06:02.647 11:31:01 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n1 00:06:02.647 No valid GPT data, bailing 00:06:02.647 11:31:01 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:06:02.647 11:31:01 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:06:02.647 11:31:01 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:06:02.647 11:31:01 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1 00:06:02.647 11:31:01 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n1 00:06:02.647 11:31:01 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n1 ]] 
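Each namespace above passes the same two gates before being added to blocks: no recognizable partition table (spdk-gpt.py reports "No valid GPT data, bailing", cross-checked with blkid -s PTTYPE) and a capacity of at least min_disk_size, 3221225472 bytes (3 GiB); the nvme2n1 size check continues just below. A sketch of the gate showing only the blkid fallback path, with names reconstructed from the trace:

```bash
#!/usr/bin/env bash
# Sketch of the per-namespace eligibility check traced above (reconstructed):
# a disk qualifies only if it carries no partition table and is big enough.
min_disk_size=3221225472   # 3 GiB, the value used by the traced comparison

block_in_use() {
    # blkid prints nothing for PTTYPE when the disk has no partition table.
    [[ -n $(blkid -s PTTYPE -o value "/dev/$1") ]]
}

dev_size_bytes() {
    # sysfs reports the size in 512-byte sectors for every block device.
    echo $(( $(cat "/sys/block/$1/size") * 512 ))
}

for dev in nvme0n1 nvme1n1 nvme2n1; do
    if ! block_in_use "$dev" && (( $(dev_size_bytes "$dev") >= min_disk_size )); then
        echo "$dev: usable"
    fi
done
```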
00:06:02.647 11:31:01 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:06:02.647 11:31:01 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:06:02.647 11:31:01 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:02.647 11:31:01 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:06:02.647 11:31:01 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:02.647 11:31:01 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n2 00:06:02.647 11:31:01 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:06:02.647 11:31:01 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:06:02.647 11:31:01 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:06:02.647 11:31:01 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n2 00:06:02.647 11:31:01 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n2 pt 00:06:02.647 11:31:01 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n2 00:06:02.647 No valid GPT data, bailing 00:06:02.647 11:31:01 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:06:02.647 11:31:01 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:06:02.647 11:31:01 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:06:02.647 11:31:01 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n2 00:06:02.647 11:31:01 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n2 00:06:02.647 11:31:01 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n2 ]] 00:06:02.647 11:31:01 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:06:02.647 11:31:01 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:06:02.647 11:31:01 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:02.647 11:31:01 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:06:02.647 11:31:01 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:02.647 11:31:01 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n3 00:06:02.647 11:31:01 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:06:02.647 11:31:01 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:06:02.647 11:31:01 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:06:02.647 11:31:01 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n3 00:06:02.647 11:31:01 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n3 pt 00:06:02.647 11:31:01 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n3 00:06:02.905 No valid GPT data, bailing 00:06:02.905 11:31:01 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:06:02.905 11:31:01 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:06:02.905 11:31:01 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:06:02.905 11:31:01 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n3 00:06:02.905 11:31:01 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n3 00:06:02.905 11:31:01 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n3 ]] 00:06:02.905 11:31:01 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:06:02.905 11:31:01 
setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:06:02.905 11:31:01 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:02.905 11:31:01 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:06:02.905 11:31:01 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:02.905 11:31:01 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme3n1 00:06:02.905 11:31:01 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme3 00:06:02.905 11:31:01 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:13.0 00:06:02.905 11:31:01 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\3\.\0* ]] 00:06:02.905 11:31:01 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme3n1 00:06:02.905 11:31:01 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme3n1 pt 00:06:02.906 11:31:01 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme3n1 00:06:02.906 No valid GPT data, bailing 00:06:02.906 11:31:01 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:06:02.906 11:31:01 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:06:02.906 11:31:01 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:06:02.906 11:31:01 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme3n1 00:06:02.906 11:31:01 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme3n1 00:06:02.906 11:31:01 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme3n1 ]] 00:06:02.906 11:31:01 setup.sh.devices -- setup/common.sh@80 -- # echo 1073741824 00:06:02.906 11:31:01 setup.sh.devices -- setup/devices.sh@204 -- # (( 1073741824 >= min_disk_size )) 00:06:02.906 11:31:01 setup.sh.devices -- setup/devices.sh@209 -- # (( 5 > 0 )) 00:06:02.906 11:31:01 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:06:02.906 11:31:01 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:06:02.906 11:31:01 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:02.906 11:31:01 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:02.906 11:31:01 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:02.906 ************************************ 00:06:02.906 START TEST nvme_mount 00:06:02.906 ************************************ 00:06:02.906 11:31:01 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:06:02.906 11:31:01 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:06:02.906 11:31:01 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:06:02.906 11:31:01 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:02.906 11:31:01 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:02.906 11:31:01 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:06:02.906 11:31:01 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:06:02.906 11:31:01 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:06:02.906 11:31:01 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:06:02.906 11:31:01 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local 
part part_start=0 part_end=0 00:06:02.906 11:31:01 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:06:02.906 11:31:01 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:06:02.906 11:31:01 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:06:02.906 11:31:01 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:02.906 11:31:01 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:02.906 11:31:01 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:06:02.906 11:31:01 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:02.906 11:31:01 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:06:02.906 11:31:01 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:06:02.906 11:31:01 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:06:03.841 Creating new GPT entries in memory. 00:06:03.841 GPT data structures destroyed! You may now partition the disk using fdisk or 00:06:03.841 other utilities. 00:06:03.841 11:31:02 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:06:03.841 11:31:02 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:03.841 11:31:02 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:03.841 11:31:02 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:03.841 11:31:02 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:06:05.214 Creating new GPT entries in memory. 00:06:05.214 The operation has completed successfully. 
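The sgdisk output above is the nvme_mount setup: zap the GPT, create a single 128 MiB partition (sectors 2048 through 264191), wait for the partition uevent, then format and mount it, which is what the next entries show. A sketch of that flow, with udevadm settle standing in for the traced scripts/sync_dev_uevents.sh helper:

```bash
#!/usr/bin/env bash
# Sketch of the partitioning flow around the sgdisk output above
# (a reconstruction of the traced steps, not the verbatim common.sh).
disk=/dev/nvme0n1
mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount

sgdisk "$disk" --zap-all                          # wipe GPT and MBR remnants
# flock serializes against anything else poking the disk, matching the
# traced `flock /dev/nvme0n1 sgdisk ...` invocation.
flock "$disk" sgdisk "$disk" --new=1:2048:264191  # 262144 sectors = 128 MiB
udevadm settle                                    # wait until ${disk}p1 exists

mkfs.ext4 -qF "${disk}p1"
mkdir -p "$mnt"
mount "${disk}p1" "$mnt"
```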
00:06:05.214 11:31:03 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:05.214 11:31:03 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:05.214 11:31:03 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 59728 00:06:05.214 11:31:03 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:05.214 11:31:03 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:06:05.214 11:31:03 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:05.214 11:31:03 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:06:05.214 11:31:03 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:06:05.214 11:31:03 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:05.214 11:31:03 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:05.214 11:31:03 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:06:05.214 11:31:03 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:06:05.214 11:31:03 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:05.214 11:31:03 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:05.214 11:31:03 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:06:05.214 11:31:03 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:05.214 11:31:03 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:06:05.214 11:31:03 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:06:05.214 11:31:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:05.214 11:31:03 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:06:05.214 11:31:03 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:06:05.214 11:31:03 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:05.214 11:31:03 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:05.214 11:31:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:05.214 11:31:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:06:05.214 11:31:04 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:05.214 11:31:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:05.214 11:31:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:05.214 11:31:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:05.472 11:31:04 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:05.472 11:31:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:05.472 11:31:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:05.472 11:31:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:05.472 11:31:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:05.472 11:31:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:05.729 11:31:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:05.729 11:31:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:05.987 11:31:04 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:05.987 11:31:04 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:06:05.987 11:31:04 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:05.987 11:31:04 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:05.987 11:31:04 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:05.987 11:31:04 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:06:05.987 11:31:04 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:05.987 11:31:04 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:05.987 11:31:04 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:05.987 11:31:04 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:06:05.987 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:05.987 11:31:04 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:05.987 11:31:04 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:06.245 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:06:06.245 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:06:06.245 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:06:06.245 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:06:06.245 11:31:05 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:06:06.245 11:31:05 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:06:06.245 11:31:05 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:06.245 11:31:05 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:06:06.245 11:31:05 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:06:06.245 11:31:05 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:06.245 11:31:05 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:06.245 11:31:05 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:06:06.245 11:31:05 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:06:06.245 11:31:05 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:06.245 11:31:05 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:06.245 11:31:05 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:06:06.245 11:31:05 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:06.245 11:31:05 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:06:06.245 11:31:05 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:06:06.245 11:31:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:06.245 11:31:05 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:06:06.245 11:31:05 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:06:06.245 11:31:05 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:06.245 11:31:05 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:06.503 11:31:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:06.503 11:31:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:06:06.503 11:31:05 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:06.503 11:31:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:06.503 11:31:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:06.503 11:31:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:06.503 11:31:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:06.503 11:31:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:06.760 11:31:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:06.760 11:31:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:06.760 11:31:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:06.760 11:31:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:07.018 11:31:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:07.018 11:31:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:07.275 11:31:06 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:07.275 11:31:06 setup.sh.devices.nvme_mount -- 
setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:06:07.275 11:31:06 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:07.275 11:31:06 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:07.275 11:31:06 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:07.276 11:31:06 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:07.276 11:31:06 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:06:07.276 11:31:06 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:06:07.276 11:31:06 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:06:07.276 11:31:06 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:06:07.276 11:31:06 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:06:07.276 11:31:06 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:06:07.276 11:31:06 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:06:07.276 11:31:06 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:06:07.276 11:31:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:07.276 11:31:06 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:06:07.276 11:31:06 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:06:07.276 11:31:06 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:07.276 11:31:06 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:07.534 11:31:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:07.534 11:31:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:06:07.534 11:31:06 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:07.534 11:31:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:07.534 11:31:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:07.534 11:31:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:07.792 11:31:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:07.792 11:31:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:07.792 11:31:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:07.792 11:31:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:07.792 11:31:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:07.792 11:31:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:08.051 11:31:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:08.051 11:31:07 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:08.310 11:31:07 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:08.310 11:31:07 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:06:08.310 11:31:07 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:06:08.310 11:31:07 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:06:08.310 11:31:07 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:08.310 11:31:07 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:08.310 11:31:07 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:08.310 11:31:07 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:08.310 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:08.310 00:06:08.310 real 0m5.468s 00:06:08.310 user 0m1.503s 00:06:08.310 sys 0m1.615s 00:06:08.310 11:31:07 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:08.310 11:31:07 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:06:08.310 ************************************ 00:06:08.310 END TEST nvme_mount 00:06:08.310 ************************************ 00:06:08.310 11:31:07 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:06:08.310 11:31:07 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:08.310 11:31:07 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:08.310 11:31:07 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:08.310 ************************************ 00:06:08.310 START TEST dm_mount 00:06:08.310 ************************************ 00:06:08.310 11:31:07 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:06:08.310 11:31:07 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:06:08.595 11:31:07 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:06:08.595 11:31:07 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:06:08.595 11:31:07 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:06:08.595 11:31:07 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:06:08.595 11:31:07 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:06:08.595 11:31:07 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:06:08.595 11:31:07 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:06:08.595 11:31:07 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:06:08.595 11:31:07 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:06:08.595 11:31:07 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:06:08.595 11:31:07 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:08.595 11:31:07 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:08.595 11:31:07 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:06:08.595 11:31:07 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:08.595 11:31:07 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:08.595 11:31:07 setup.sh.devices.dm_mount -- 
setup/common.sh@46 -- # (( part++ )) 00:06:08.595 11:31:07 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:08.595 11:31:07 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:06:08.595 11:31:07 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:06:08.595 11:31:07 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:06:09.528 Creating new GPT entries in memory. 00:06:09.528 GPT data structures destroyed! You may now partition the disk using fdisk or 00:06:09.528 other utilities. 00:06:09.528 11:31:08 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:06:09.528 11:31:08 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:09.528 11:31:08 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:09.528 11:31:08 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:09.528 11:31:08 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:06:10.462 Creating new GPT entries in memory. 00:06:10.462 The operation has completed successfully. 00:06:10.462 11:31:09 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:10.462 11:31:09 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:10.462 11:31:09 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:10.462 11:31:09 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:10.462 11:31:09 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:06:11.397 The operation has completed successfully. 
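With both partitions in place, the dm_mount test next builds a single device-mapper device named nvme_dm_test over nvme0n1p1 and nvme0n1p2 and resolves it to its dm-N node via readlink, as the entries below show. A sketch under the assumption of a plain linear concatenation; the log never prints the actual dm table:

```bash
#!/usr/bin/env bash
# Sketch of the device-mapper step that follows (reconstructed): join the two
# partitions into one linear target and resolve the resulting dm node.
p1=/dev/nvme0n1p1 p2=/dev/nvme0n1p2
s1=$(blockdev --getsz "$p1")   # sizes in 512-byte sectors
s2=$(blockdev --getsz "$p2")

# dmsetup reads the table from stdin: <start> <length> linear <dev> <offset>
dmsetup create nvme_dm_test <<EOF
0 $s1 linear $p1 0
$s1 $s2 linear $p2 0
EOF

dm=$(readlink -f /dev/mapper/nvme_dm_test)   # e.g. /dev/dm-0
echo "nvme_dm_test -> ${dm##*/}"
# Each backing partition now exposes the dm node under holders/, which is
# exactly what the verify step below checks:
ls /sys/class/block/nvme0n1p1/holders
```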
00:06:11.397 11:31:10 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:11.397 11:31:10 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:11.397 11:31:10 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 60363 00:06:11.397 11:31:10 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:06:11.655 11:31:10 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:11.655 11:31:10 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:11.656 11:31:10 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:06:11.656 11:31:10 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:06:11.656 11:31:10 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:11.656 11:31:10 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:06:11.656 11:31:10 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:11.656 11:31:10 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:06:11.656 11:31:10 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:06:11.656 11:31:10 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:06:11.656 11:31:10 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:06:11.656 11:31:10 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:06:11.656 11:31:10 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:11.656 11:31:10 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:06:11.656 11:31:10 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:11.656 11:31:10 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:11.656 11:31:10 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:06:11.656 11:31:10 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:11.656 11:31:10 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:11.656 11:31:10 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:06:11.656 11:31:10 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:06:11.656 11:31:10 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:11.656 11:31:10 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:11.656 11:31:10 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:06:11.656 11:31:10 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:06:11.656 11:31:10 setup.sh.devices.dm_mount -- 
setup/devices.sh@56 -- # : 00:06:11.656 11:31:10 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:06:11.656 11:31:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:11.656 11:31:10 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:06:11.656 11:31:10 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:06:11.656 11:31:10 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:11.656 11:31:10 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:11.914 11:31:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:11.914 11:31:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:06:11.914 11:31:10 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:06:11.914 11:31:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:11.914 11:31:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:11.914 11:31:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:11.914 11:31:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:11.914 11:31:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:11.914 11:31:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:11.914 11:31:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:11.914 11:31:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:11.914 11:31:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:12.509 11:31:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:12.509 11:31:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:12.509 11:31:11 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:12.509 11:31:11 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:06:12.509 11:31:11 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:12.509 11:31:11 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:06:12.509 11:31:11 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:12.509 11:31:11 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:12.509 11:31:11 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:06:12.509 11:31:11 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:06:12.509 11:31:11 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:06:12.509 11:31:11 setup.sh.devices.dm_mount -- 
setup/devices.sh@50 -- # local mount_point= 00:06:12.509 11:31:11 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:06:12.510 11:31:11 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:06:12.510 11:31:11 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:06:12.510 11:31:11 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:06:12.510 11:31:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:12.510 11:31:11 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:06:12.510 11:31:11 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:06:12.510 11:31:11 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:12.510 11:31:11 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:12.768 11:31:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:12.768 11:31:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:06:12.768 11:31:11 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:06:12.768 11:31:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:12.768 11:31:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:12.768 11:31:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:13.025 11:31:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:13.025 11:31:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:13.025 11:31:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:13.025 11:31:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:13.025 11:31:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:13.026 11:31:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:13.284 11:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:13.284 11:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:13.542 11:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:13.542 11:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:06:13.542 11:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:06:13.542 11:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:06:13.542 11:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:13.542 11:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:13.542 11:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:06:13.542 11:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:13.542 11:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 
00:06:13.542 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:13.542 11:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:13.542 11:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:06:13.542 00:06:13.542 real 0m5.184s 00:06:13.542 user 0m1.007s 00:06:13.542 sys 0m1.108s 00:06:13.542 11:31:12 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:13.542 11:31:12 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:06:13.542 ************************************ 00:06:13.542 END TEST dm_mount 00:06:13.542 ************************************ 00:06:13.542 11:31:12 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:06:13.542 11:31:12 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:06:13.542 11:31:12 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:13.542 11:31:12 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:13.542 11:31:12 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:06:13.542 11:31:12 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:13.542 11:31:12 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:13.801 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:06:13.801 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:06:13.801 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:06:13.801 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:06:13.801 11:31:12 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:06:13.801 11:31:12 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:14.060 11:31:12 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:14.060 11:31:12 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:14.060 11:31:12 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:14.060 11:31:12 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:06:14.060 11:31:12 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:06:14.060 00:06:14.060 real 0m12.722s 00:06:14.060 user 0m3.455s 00:06:14.060 sys 0m3.552s 00:06:14.060 11:31:12 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:14.060 11:31:12 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:14.060 ************************************ 00:06:14.060 END TEST devices 00:06:14.060 ************************************ 00:06:14.060 ************************************ 00:06:14.060 END TEST setup.sh 00:06:14.060 ************************************ 00:06:14.060 00:06:14.060 real 0m45.076s 00:06:14.060 user 0m10.743s 00:06:14.060 sys 0m13.901s 00:06:14.060 11:31:12 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:14.060 11:31:12 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:14.060 11:31:12 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:14.626 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:14.884 Hugepages 00:06:14.884 node hugesize free / total 00:06:14.884 node0 1048576kB 0 / 0 00:06:14.884 node0 2048kB 2048 / 2048 00:06:14.885 
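Both the hugepage summary above and the per-device binding table that follows come from the same read-only "setup.sh status" sub-command invoked in the trace. As a hedged aside, it can be re-run standalone at any point to inspect hugepage counts and driver bindings without changing any state:

    sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status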
00:06:14.885 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:15.142 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:15.142 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:06:15.142 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:06:15.400 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:06:15.400 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:06:15.400 11:31:14 -- spdk/autotest.sh@130 -- # uname -s 00:06:15.400 11:31:14 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:06:15.400 11:31:14 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:06:15.400 11:31:14 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:15.967 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:16.532 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:16.532 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:16.532 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:16.532 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:16.790 11:31:15 -- common/autotest_common.sh@1532 -- # sleep 1 00:06:17.728 11:31:16 -- common/autotest_common.sh@1533 -- # bdfs=() 00:06:17.728 11:31:16 -- common/autotest_common.sh@1533 -- # local bdfs 00:06:17.728 11:31:16 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:06:17.728 11:31:16 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:06:17.728 11:31:16 -- common/autotest_common.sh@1513 -- # bdfs=() 00:06:17.728 11:31:16 -- common/autotest_common.sh@1513 -- # local bdfs 00:06:17.728 11:31:16 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:17.728 11:31:16 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:17.728 11:31:16 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:06:17.728 11:31:16 -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:06:17.728 11:31:16 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:06:17.728 11:31:16 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:17.986 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:18.244 Waiting for block devices as requested 00:06:18.244 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:18.506 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:18.506 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:18.506 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:23.769 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:23.769 11:31:22 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:06:23.769 11:31:22 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:06:23.769 11:31:22 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:23.769 11:31:22 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:06:23.769 11:31:22 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:23.769 11:31:22 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:06:23.769 11:31:22 -- common/autotest_common.sh@1507 -- # basename 
/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:23.769 11:31:22 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:06:23.769 11:31:22 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:06:23.769 11:31:22 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:06:23.769 11:31:22 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:06:23.769 11:31:22 -- common/autotest_common.sh@1545 -- # grep oacs 00:06:23.769 11:31:22 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:06:23.769 11:31:22 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:06:23.769 11:31:22 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:06:23.769 11:31:22 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:06:23.769 11:31:22 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:06:23.769 11:31:22 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:06:23.769 11:31:22 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:06:23.769 11:31:22 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:06:23.769 11:31:22 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:06:23.769 11:31:22 -- common/autotest_common.sh@1557 -- # continue 00:06:23.769 11:31:22 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:06:23.769 11:31:22 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:06:23.769 11:31:22 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:23.769 11:31:22 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:06:23.769 11:31:22 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:23.769 11:31:22 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:06:23.769 11:31:22 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:23.769 11:31:22 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:06:23.769 11:31:22 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:06:23.769 11:31:22 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:06:23.769 11:31:22 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:06:23.769 11:31:22 -- common/autotest_common.sh@1545 -- # grep oacs 00:06:23.769 11:31:22 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:06:23.769 11:31:22 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:06:23.769 11:31:22 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:06:23.769 11:31:22 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:06:23.769 11:31:22 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:06:23.769 11:31:22 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:06:23.769 11:31:22 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:06:23.769 11:31:22 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:06:23.769 11:31:22 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:06:23.769 11:31:22 -- common/autotest_common.sh@1557 -- # continue 00:06:23.769 11:31:22 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:06:23.769 11:31:22 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:06:23.769 11:31:22 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:23.769 11:31:22 -- common/autotest_common.sh@1502 -- # 
grep 0000:00:12.0/nvme/nvme 00:06:23.769 11:31:22 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:06:23.769 11:31:22 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:06:23.769 11:31:22 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:06:23.769 11:31:22 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme2 00:06:23.769 11:31:22 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme2 00:06:23.769 11:31:22 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme2 ]] 00:06:23.769 11:31:22 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme2 00:06:23.769 11:31:22 -- common/autotest_common.sh@1545 -- # grep oacs 00:06:23.769 11:31:22 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:06:23.769 11:31:22 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:06:23.769 11:31:22 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:06:23.769 11:31:22 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:06:23.769 11:31:22 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme2 00:06:23.769 11:31:22 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:06:23.769 11:31:22 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:06:23.769 11:31:22 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:06:23.769 11:31:22 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:06:23.769 11:31:22 -- common/autotest_common.sh@1557 -- # continue 00:06:23.769 11:31:22 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:06:23.769 11:31:22 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:06:23.769 11:31:22 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:23.769 11:31:22 -- common/autotest_common.sh@1502 -- # grep 0000:00:13.0/nvme/nvme 00:06:23.769 11:31:22 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:06:23.769 11:31:22 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:06:23.769 11:31:22 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:06:23.769 11:31:22 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme3 00:06:23.769 11:31:22 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme3 00:06:23.769 11:31:22 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme3 ]] 00:06:23.769 11:31:22 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme3 00:06:23.769 11:31:22 -- common/autotest_common.sh@1545 -- # grep oacs 00:06:23.769 11:31:22 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:06:23.769 11:31:22 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:06:23.769 11:31:22 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:06:23.769 11:31:22 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:06:23.769 11:31:22 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme3 00:06:23.769 11:31:22 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:06:23.769 11:31:22 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:06:23.769 11:31:22 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:06:23.769 11:31:22 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:06:23.769 11:31:22 -- common/autotest_common.sh@1557 -- # continue 00:06:23.769 11:31:22 -- spdk/autotest.sh@135 -- # timing_exit 
pre_cleanup 00:06:23.769 11:31:22 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:23.769 11:31:22 -- common/autotest_common.sh@10 -- # set +x 00:06:23.769 11:31:22 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:06:23.769 11:31:22 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:23.769 11:31:22 -- common/autotest_common.sh@10 -- # set +x 00:06:23.769 11:31:22 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:24.334 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:24.900 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:24.900 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:24.900 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:24.900 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:24.900 11:31:23 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:06:24.900 11:31:23 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:24.900 11:31:23 -- common/autotest_common.sh@10 -- # set +x 00:06:25.160 11:31:23 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:06:25.160 11:31:23 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:06:25.160 11:31:23 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:06:25.160 11:31:23 -- common/autotest_common.sh@1577 -- # bdfs=() 00:06:25.160 11:31:23 -- common/autotest_common.sh@1577 -- # local bdfs 00:06:25.160 11:31:23 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:06:25.160 11:31:23 -- common/autotest_common.sh@1513 -- # bdfs=() 00:06:25.160 11:31:23 -- common/autotest_common.sh@1513 -- # local bdfs 00:06:25.160 11:31:23 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:25.160 11:31:23 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:25.160 11:31:23 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:06:25.160 11:31:24 -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:06:25.160 11:31:24 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:06:25.160 11:31:24 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:06:25.160 11:31:24 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:06:25.160 11:31:24 -- common/autotest_common.sh@1580 -- # device=0x0010 00:06:25.160 11:31:24 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:25.160 11:31:24 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:06:25.160 11:31:24 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:06:25.160 11:31:24 -- common/autotest_common.sh@1580 -- # device=0x0010 00:06:25.160 11:31:24 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:25.160 11:31:24 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:06:25.160 11:31:24 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:06:25.160 11:31:24 -- common/autotest_common.sh@1580 -- # device=0x0010 00:06:25.160 11:31:24 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:25.160 11:31:24 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:06:25.160 11:31:24 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:06:25.160 11:31:24 -- common/autotest_common.sh@1580 -- # device=0x0010 00:06:25.160 
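The opal_revert_cleanup discovery running here reduces to a few lines of shell. A hedged reconstruction from the trace, assuming $rootdir points at the SPDK checkout: gen_nvme.sh emits a bdev_nvme config whose traddr fields are the PCI addresses of every attached NVMe controller, and sysfs exposes each controller's PCI device ID.

    get_nvme_bdfs() {
        # one PCI address (BDF) per attached NVMe controller
        "$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'
    }

    bdfs=()
    for bdf in $(get_nvme_bdfs); do
        device=$(cat "/sys/bus/pci/devices/$bdf/device")
        # only 0x0a54 parts are opal-reverted; the QEMU controllers here
        # all report 0x0010, so the list stays empty and the step is a no-op
        [[ $device == 0x0a54 ]] && bdfs+=("$bdf")
    done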
11:31:24 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:25.160 11:31:24 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:06:25.160 11:31:24 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:06:25.160 11:31:24 -- common/autotest_common.sh@1593 -- # return 0 00:06:25.160 11:31:24 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:06:25.160 11:31:24 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:06:25.160 11:31:24 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:25.160 11:31:24 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:25.160 11:31:24 -- spdk/autotest.sh@162 -- # timing_enter lib 00:06:25.160 11:31:24 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:25.160 11:31:24 -- common/autotest_common.sh@10 -- # set +x 00:06:25.160 11:31:24 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:06:25.160 11:31:24 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:25.160 11:31:24 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:25.160 11:31:24 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:25.160 11:31:24 -- common/autotest_common.sh@10 -- # set +x 00:06:25.160 ************************************ 00:06:25.160 START TEST env 00:06:25.160 ************************************ 00:06:25.160 11:31:24 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:25.160 * Looking for test storage... 00:06:25.160 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:06:25.160 11:31:24 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:25.160 11:31:24 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:25.160 11:31:24 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:25.160 11:31:24 env -- common/autotest_common.sh@10 -- # set +x 00:06:25.160 ************************************ 00:06:25.160 START TEST env_memory 00:06:25.160 ************************************ 00:06:25.160 11:31:24 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:25.160 00:06:25.160 00:06:25.160 CUnit - A unit testing framework for C - Version 2.1-3 00:06:25.160 http://cunit.sourceforge.net/ 00:06:25.160 00:06:25.160 00:06:25.160 Suite: memory 00:06:25.421 Test: alloc and free memory map ...[2024-07-25 11:31:24.247385] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:25.421 passed 00:06:25.421 Test: mem map translation ...[2024-07-25 11:31:24.316037] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:25.421 [2024-07-25 11:31:24.316156] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:25.421 [2024-07-25 11:31:24.316266] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:25.421 [2024-07-25 11:31:24.316296] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:25.421 passed 00:06:25.421 Test: mem map registration ...[2024-07-25 11:31:24.416659] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register 
parameters, vaddr=0x200000 len=1234 00:06:25.421 [2024-07-25 11:31:24.416773] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:06:25.421 passed 00:06:25.679 Test: mem map adjacent registrations ...passed 00:06:25.679 00:06:25.679 Run Summary: Type Total Ran Passed Failed Inactive 00:06:25.679 suites 1 1 n/a 0 0 00:06:25.679 tests 4 4 4 0 0 00:06:25.679 asserts 152 152 152 0 n/a 00:06:25.679 00:06:25.679 Elapsed time = 0.353 seconds 00:06:25.679 00:06:25.679 real 0m0.397s 00:06:25.679 user 0m0.359s 00:06:25.679 sys 0m0.028s 00:06:25.679 11:31:24 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:25.679 11:31:24 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:25.679 ************************************ 00:06:25.679 END TEST env_memory 00:06:25.679 ************************************ 00:06:25.679 11:31:24 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:25.679 11:31:24 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:25.679 11:31:24 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:25.679 11:31:24 env -- common/autotest_common.sh@10 -- # set +x 00:06:25.679 ************************************ 00:06:25.679 START TEST env_vtophys 00:06:25.679 ************************************ 00:06:25.679 11:31:24 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:25.679 EAL: lib.eal log level changed from notice to debug 00:06:25.679 EAL: Detected lcore 0 as core 0 on socket 0 00:06:25.679 EAL: Detected lcore 1 as core 0 on socket 0 00:06:25.679 EAL: Detected lcore 2 as core 0 on socket 0 00:06:25.679 EAL: Detected lcore 3 as core 0 on socket 0 00:06:25.679 EAL: Detected lcore 4 as core 0 on socket 0 00:06:25.679 EAL: Detected lcore 5 as core 0 on socket 0 00:06:25.679 EAL: Detected lcore 6 as core 0 on socket 0 00:06:25.679 EAL: Detected lcore 7 as core 0 on socket 0 00:06:25.679 EAL: Detected lcore 8 as core 0 on socket 0 00:06:25.679 EAL: Detected lcore 9 as core 0 on socket 0 00:06:25.679 EAL: Maximum logical cores by configuration: 128 00:06:25.679 EAL: Detected CPU lcores: 10 00:06:25.679 EAL: Detected NUMA nodes: 1 00:06:25.679 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:06:25.679 EAL: Detected shared linkage of DPDK 00:06:25.679 EAL: No shared files mode enabled, IPC will be disabled 00:06:25.679 EAL: Selected IOVA mode 'PA' 00:06:25.679 EAL: Probing VFIO support... 00:06:25.679 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:25.679 EAL: VFIO modules not loaded, skipping VFIO support... 00:06:25.679 EAL: Ask a virtual area of 0x2e000 bytes 00:06:25.679 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:25.679 EAL: Setting up physically contiguous memory... 
00:06:25.680 EAL: Setting maximum number of open files to 524288 00:06:25.680 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:25.680 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:25.680 EAL: Ask a virtual area of 0x61000 bytes 00:06:25.680 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:25.680 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:25.680 EAL: Ask a virtual area of 0x400000000 bytes 00:06:25.680 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:25.680 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:25.680 EAL: Ask a virtual area of 0x61000 bytes 00:06:25.680 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:25.680 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:25.680 EAL: Ask a virtual area of 0x400000000 bytes 00:06:25.680 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:25.680 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:25.680 EAL: Ask a virtual area of 0x61000 bytes 00:06:25.680 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:25.680 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:25.680 EAL: Ask a virtual area of 0x400000000 bytes 00:06:25.680 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:25.680 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:25.680 EAL: Ask a virtual area of 0x61000 bytes 00:06:25.680 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:25.680 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:25.680 EAL: Ask a virtual area of 0x400000000 bytes 00:06:25.680 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:25.680 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:25.680 EAL: Hugepages will be freed exactly as allocated. 00:06:25.680 EAL: No shared files mode enabled, IPC is disabled 00:06:25.680 EAL: No shared files mode enabled, IPC is disabled 00:06:25.937 EAL: TSC frequency is ~2200000 KHz 00:06:25.937 EAL: Main lcore 0 is ready (tid=7f32ae411a40;cpuset=[0]) 00:06:25.937 EAL: Trying to obtain current memory policy. 00:06:25.937 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:25.938 EAL: Restoring previous memory policy: 0 00:06:25.938 EAL: request: mp_malloc_sync 00:06:25.938 EAL: No shared files mode enabled, IPC is disabled 00:06:25.938 EAL: Heap on socket 0 was expanded by 2MB 00:06:25.938 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:25.938 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:25.938 EAL: Mem event callback 'spdk:(nil)' registered 00:06:25.938 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:06:25.938 00:06:25.938 00:06:25.938 CUnit - A unit testing framework for C - Version 2.1-3 00:06:25.938 http://cunit.sourceforge.net/ 00:06:25.938 00:06:25.938 00:06:25.938 Suite: components_suite 00:06:26.504 Test: vtophys_malloc_test ...passed 00:06:26.504 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
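The vtophys_spdk_malloc_test pass that follows climbs an allocation ladder. The expand/shrink sizes reported below (4, 6, 10, ..., 1026 MB) are consistent with 2^n MB allocations on top of the 2 MB already added at heap bootstrap; that reading is inferred from the numbers, not taken from the test source.

    # reproduces the ladder of "Heap on socket 0 was expanded by XMB" sizes
    for n in $(seq 1 10); do echo "$((2 ** n + 2))MB"; done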
00:06:26.504 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:26.504 EAL: Restoring previous memory policy: 4 00:06:26.504 EAL: Calling mem event callback 'spdk:(nil)' 00:06:26.504 EAL: request: mp_malloc_sync 00:06:26.504 EAL: No shared files mode enabled, IPC is disabled 00:06:26.504 EAL: Heap on socket 0 was expanded by 4MB 00:06:26.504 EAL: Calling mem event callback 'spdk:(nil)' 00:06:26.504 EAL: request: mp_malloc_sync 00:06:26.504 EAL: No shared files mode enabled, IPC is disabled 00:06:26.504 EAL: Heap on socket 0 was shrunk by 4MB 00:06:26.504 EAL: Trying to obtain current memory policy. 00:06:26.504 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:26.504 EAL: Restoring previous memory policy: 4 00:06:26.504 EAL: Calling mem event callback 'spdk:(nil)' 00:06:26.504 EAL: request: mp_malloc_sync 00:06:26.504 EAL: No shared files mode enabled, IPC is disabled 00:06:26.504 EAL: Heap on socket 0 was expanded by 6MB 00:06:26.504 EAL: Calling mem event callback 'spdk:(nil)' 00:06:26.504 EAL: request: mp_malloc_sync 00:06:26.504 EAL: No shared files mode enabled, IPC is disabled 00:06:26.504 EAL: Heap on socket 0 was shrunk by 6MB 00:06:26.504 EAL: Trying to obtain current memory policy. 00:06:26.504 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:26.504 EAL: Restoring previous memory policy: 4 00:06:26.504 EAL: Calling mem event callback 'spdk:(nil)' 00:06:26.504 EAL: request: mp_malloc_sync 00:06:26.504 EAL: No shared files mode enabled, IPC is disabled 00:06:26.504 EAL: Heap on socket 0 was expanded by 10MB 00:06:26.504 EAL: Calling mem event callback 'spdk:(nil)' 00:06:26.504 EAL: request: mp_malloc_sync 00:06:26.504 EAL: No shared files mode enabled, IPC is disabled 00:06:26.504 EAL: Heap on socket 0 was shrunk by 10MB 00:06:26.504 EAL: Trying to obtain current memory policy. 00:06:26.504 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:26.504 EAL: Restoring previous memory policy: 4 00:06:26.504 EAL: Calling mem event callback 'spdk:(nil)' 00:06:26.504 EAL: request: mp_malloc_sync 00:06:26.504 EAL: No shared files mode enabled, IPC is disabled 00:06:26.504 EAL: Heap on socket 0 was expanded by 18MB 00:06:26.504 EAL: Calling mem event callback 'spdk:(nil)' 00:06:26.504 EAL: request: mp_malloc_sync 00:06:26.504 EAL: No shared files mode enabled, IPC is disabled 00:06:26.504 EAL: Heap on socket 0 was shrunk by 18MB 00:06:26.504 EAL: Trying to obtain current memory policy. 00:06:26.504 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:26.504 EAL: Restoring previous memory policy: 4 00:06:26.504 EAL: Calling mem event callback 'spdk:(nil)' 00:06:26.504 EAL: request: mp_malloc_sync 00:06:26.504 EAL: No shared files mode enabled, IPC is disabled 00:06:26.504 EAL: Heap on socket 0 was expanded by 34MB 00:06:26.504 EAL: Calling mem event callback 'spdk:(nil)' 00:06:26.504 EAL: request: mp_malloc_sync 00:06:26.504 EAL: No shared files mode enabled, IPC is disabled 00:06:26.504 EAL: Heap on socket 0 was shrunk by 34MB 00:06:26.761 EAL: Trying to obtain current memory policy. 
00:06:26.761 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:26.761 EAL: Restoring previous memory policy: 4 00:06:26.761 EAL: Calling mem event callback 'spdk:(nil)' 00:06:26.761 EAL: request: mp_malloc_sync 00:06:26.761 EAL: No shared files mode enabled, IPC is disabled 00:06:26.761 EAL: Heap on socket 0 was expanded by 66MB 00:06:26.761 EAL: Calling mem event callback 'spdk:(nil)' 00:06:26.761 EAL: request: mp_malloc_sync 00:06:26.761 EAL: No shared files mode enabled, IPC is disabled 00:06:26.761 EAL: Heap on socket 0 was shrunk by 66MB 00:06:27.019 EAL: Trying to obtain current memory policy. 00:06:27.019 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:27.019 EAL: Restoring previous memory policy: 4 00:06:27.019 EAL: Calling mem event callback 'spdk:(nil)' 00:06:27.019 EAL: request: mp_malloc_sync 00:06:27.019 EAL: No shared files mode enabled, IPC is disabled 00:06:27.019 EAL: Heap on socket 0 was expanded by 130MB 00:06:27.019 EAL: Calling mem event callback 'spdk:(nil)' 00:06:27.278 EAL: request: mp_malloc_sync 00:06:27.278 EAL: No shared files mode enabled, IPC is disabled 00:06:27.278 EAL: Heap on socket 0 was shrunk by 130MB 00:06:27.278 EAL: Trying to obtain current memory policy. 00:06:27.278 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:27.536 EAL: Restoring previous memory policy: 4 00:06:27.536 EAL: Calling mem event callback 'spdk:(nil)' 00:06:27.536 EAL: request: mp_malloc_sync 00:06:27.536 EAL: No shared files mode enabled, IPC is disabled 00:06:27.536 EAL: Heap on socket 0 was expanded by 258MB 00:06:27.793 EAL: Calling mem event callback 'spdk:(nil)' 00:06:27.793 EAL: request: mp_malloc_sync 00:06:27.793 EAL: No shared files mode enabled, IPC is disabled 00:06:27.793 EAL: Heap on socket 0 was shrunk by 258MB 00:06:28.359 EAL: Trying to obtain current memory policy. 00:06:28.359 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:28.359 EAL: Restoring previous memory policy: 4 00:06:28.359 EAL: Calling mem event callback 'spdk:(nil)' 00:06:28.359 EAL: request: mp_malloc_sync 00:06:28.359 EAL: No shared files mode enabled, IPC is disabled 00:06:28.359 EAL: Heap on socket 0 was expanded by 514MB 00:06:29.293 EAL: Calling mem event callback 'spdk:(nil)' 00:06:29.293 EAL: request: mp_malloc_sync 00:06:29.293 EAL: No shared files mode enabled, IPC is disabled 00:06:29.293 EAL: Heap on socket 0 was shrunk by 514MB 00:06:30.227 EAL: Trying to obtain current memory policy. 
00:06:30.227 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:30.486 EAL: Restoring previous memory policy: 4 00:06:30.486 EAL: Calling mem event callback 'spdk:(nil)' 00:06:30.486 EAL: request: mp_malloc_sync 00:06:30.486 EAL: No shared files mode enabled, IPC is disabled 00:06:30.486 EAL: Heap on socket 0 was expanded by 1026MB 00:06:32.384 EAL: Calling mem event callback 'spdk:(nil)' 00:06:32.384 EAL: request: mp_malloc_sync 00:06:32.384 EAL: No shared files mode enabled, IPC is disabled 00:06:32.384 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:33.759 passed 00:06:33.759 00:06:33.759 Run Summary: Type Total Ran Passed Failed Inactive 00:06:33.759 suites 1 1 n/a 0 0 00:06:33.759 tests 2 2 2 0 0 00:06:33.759 asserts 5369 5369 5369 0 n/a 00:06:33.759 00:06:33.759 Elapsed time = 7.753 seconds 00:06:33.759 EAL: Calling mem event callback 'spdk:(nil)' 00:06:33.759 EAL: request: mp_malloc_sync 00:06:33.759 EAL: No shared files mode enabled, IPC is disabled 00:06:33.759 EAL: Heap on socket 0 was shrunk by 2MB 00:06:33.759 EAL: No shared files mode enabled, IPC is disabled 00:06:33.759 EAL: No shared files mode enabled, IPC is disabled 00:06:33.759 EAL: No shared files mode enabled, IPC is disabled 00:06:33.759 00:06:33.759 real 0m8.089s 00:06:33.759 user 0m6.817s 00:06:33.759 sys 0m1.099s 00:06:33.759 11:31:32 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:33.759 ************************************ 00:06:33.759 END TEST env_vtophys 00:06:33.759 ************************************ 00:06:33.759 11:31:32 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:33.759 11:31:32 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:33.759 11:31:32 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:33.759 11:31:32 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:33.759 11:31:32 env -- common/autotest_common.sh@10 -- # set +x 00:06:33.759 ************************************ 00:06:33.759 START TEST env_pci 00:06:33.759 ************************************ 00:06:33.759 11:31:32 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:33.759 00:06:33.759 00:06:33.759 CUnit - A unit testing framework for C - Version 2.1-3 00:06:33.759 http://cunit.sourceforge.net/ 00:06:33.759 00:06:33.759 00:06:33.759 Suite: pci 00:06:33.759 Test: pci_hook ...[2024-07-25 11:31:32.800196] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 62202 has claimed it 00:06:34.017 passed 00:06:34.017 00:06:34.017 Run Summary: Type Total Ran Passed Failed Inactive 00:06:34.017 suites 1 1 n/a 0 0 00:06:34.017 tests 1 1 1 0 0 00:06:34.017 asserts 25 25 25 0 n/a 00:06:34.017 00:06:34.017 Elapsed time = 0.012 seconds 00:06:34.017 EAL: Cannot find device (10000:00:01.0) 00:06:34.017 EAL: Failed to attach device on primary process 00:06:34.017 00:06:34.017 real 0m0.098s 00:06:34.017 user 0m0.043s 00:06:34.017 sys 0m0.054s 00:06:34.017 11:31:32 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:34.017 11:31:32 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:34.017 ************************************ 00:06:34.017 END TEST env_pci 00:06:34.017 ************************************ 00:06:34.017 11:31:32 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:34.017 11:31:32 env -- env/env.sh@15 -- # uname 00:06:34.017 11:31:32 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:34.017 11:31:32 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:34.017 11:31:32 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:34.017 11:31:32 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:34.017 11:31:32 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:34.017 11:31:32 env -- common/autotest_common.sh@10 -- # set +x 00:06:34.017 ************************************ 00:06:34.017 START TEST env_dpdk_post_init 00:06:34.017 ************************************ 00:06:34.017 11:31:32 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:34.017 EAL: Detected CPU lcores: 10 00:06:34.017 EAL: Detected NUMA nodes: 1 00:06:34.017 EAL: Detected shared linkage of DPDK 00:06:34.017 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:34.017 EAL: Selected IOVA mode 'PA' 00:06:34.274 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:34.274 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:06:34.274 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:06:34.274 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:06:34.274 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:06:34.274 Starting DPDK initialization... 00:06:34.274 Starting SPDK post initialization... 00:06:34.274 SPDK NVMe probe 00:06:34.274 Attaching to 0000:00:10.0 00:06:34.274 Attaching to 0000:00:11.0 00:06:34.274 Attaching to 0000:00:12.0 00:06:34.274 Attaching to 0000:00:13.0 00:06:34.274 Attached to 0000:00:10.0 00:06:34.274 Attached to 0000:00:11.0 00:06:34.274 Attached to 0000:00:13.0 00:06:34.274 Attached to 0000:00:12.0 00:06:34.274 Cleaning up... 
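As a hedged aside, the post-init binary exercised above is self-contained, so the same probe-and-attach sequence can be replayed by hand with the flags the harness passed (core mask 0x1 and a pinned base virtual address):

    sudo /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init \
        -c 0x1 --base-virtaddr=0x200000000000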
00:06:34.274 00:06:34.274 real 0m0.298s 00:06:34.274 user 0m0.087s 00:06:34.274 sys 0m0.113s 00:06:34.274 11:31:33 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:34.274 ************************************ 00:06:34.274 END TEST env_dpdk_post_init 00:06:34.274 11:31:33 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:34.274 ************************************ 00:06:34.274 11:31:33 env -- env/env.sh@26 -- # uname 00:06:34.274 11:31:33 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:34.274 11:31:33 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:34.274 11:31:33 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:34.274 11:31:33 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:34.274 11:31:33 env -- common/autotest_common.sh@10 -- # set +x 00:06:34.274 ************************************ 00:06:34.274 START TEST env_mem_callbacks 00:06:34.274 ************************************ 00:06:34.274 11:31:33 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:34.274 EAL: Detected CPU lcores: 10 00:06:34.274 EAL: Detected NUMA nodes: 1 00:06:34.274 EAL: Detected shared linkage of DPDK 00:06:34.531 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:34.531 EAL: Selected IOVA mode 'PA' 00:06:34.531 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:34.531 00:06:34.531 00:06:34.531 CUnit - A unit testing framework for C - Version 2.1-3 00:06:34.531 http://cunit.sourceforge.net/ 00:06:34.531 00:06:34.531 00:06:34.531 Suite: memory 00:06:34.532 Test: test ... 00:06:34.532 register 0x200000200000 2097152 00:06:34.532 malloc 3145728 00:06:34.532 register 0x200000400000 4194304 00:06:34.532 buf 0x2000004fffc0 len 3145728 PASSED 00:06:34.532 malloc 64 00:06:34.532 buf 0x2000004ffec0 len 64 PASSED 00:06:34.532 malloc 4194304 00:06:34.532 register 0x200000800000 6291456 00:06:34.532 buf 0x2000009fffc0 len 4194304 PASSED 00:06:34.532 free 0x2000004fffc0 3145728 00:06:34.532 free 0x2000004ffec0 64 00:06:34.532 unregister 0x200000400000 4194304 PASSED 00:06:34.532 free 0x2000009fffc0 4194304 00:06:34.532 unregister 0x200000800000 6291456 PASSED 00:06:34.532 malloc 8388608 00:06:34.532 register 0x200000400000 10485760 00:06:34.532 buf 0x2000005fffc0 len 8388608 PASSED 00:06:34.532 free 0x2000005fffc0 8388608 00:06:34.532 unregister 0x200000400000 10485760 PASSED 00:06:34.532 passed 00:06:34.532 00:06:34.532 Run Summary: Type Total Ran Passed Failed Inactive 00:06:34.532 suites 1 1 n/a 0 0 00:06:34.532 tests 1 1 1 0 0 00:06:34.532 asserts 15 15 15 0 n/a 00:06:34.532 00:06:34.532 Elapsed time = 0.060 seconds 00:06:34.532 ************************************ 00:06:34.532 END TEST env_mem_callbacks 00:06:34.532 ************************************ 00:06:34.532 00:06:34.532 real 0m0.276s 00:06:34.532 user 0m0.100s 00:06:34.532 sys 0m0.071s 00:06:34.532 11:31:33 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:34.532 11:31:33 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:34.789 ************************************ 00:06:34.789 END TEST env 00:06:34.789 ************************************ 00:06:34.789 00:06:34.789 real 0m9.496s 00:06:34.789 user 0m7.527s 00:06:34.789 sys 0m1.568s 00:06:34.789 11:31:33 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:34.789 11:31:33 env -- 
common/autotest_common.sh@10 -- # set +x 00:06:34.789 11:31:33 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:34.789 11:31:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:34.789 11:31:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:34.789 11:31:33 -- common/autotest_common.sh@10 -- # set +x 00:06:34.789 ************************************ 00:06:34.789 START TEST rpc 00:06:34.789 ************************************ 00:06:34.789 11:31:33 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:34.789 * Looking for test storage... 00:06:34.789 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:34.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.789 11:31:33 rpc -- rpc/rpc.sh@65 -- # spdk_pid=62321 00:06:34.789 11:31:33 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:06:34.789 11:31:33 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:34.789 11:31:33 rpc -- rpc/rpc.sh@67 -- # waitforlisten 62321 00:06:34.789 11:31:33 rpc -- common/autotest_common.sh@831 -- # '[' -z 62321 ']' 00:06:34.789 11:31:33 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.789 11:31:33 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:34.789 11:31:33 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.789 11:31:33 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:34.789 11:31:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.047 [2024-07-25 11:31:33.856682] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:35.047 [2024-07-25 11:31:33.857477] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62321 ] 00:06:35.047 [2024-07-25 11:31:34.045124] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.306 [2024-07-25 11:31:34.297763] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:35.306 [2024-07-25 11:31:34.297843] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 62321' to capture a snapshot of events at runtime. 00:06:35.306 [2024-07-25 11:31:34.297866] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:35.306 [2024-07-25 11:31:34.297881] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:35.306 [2024-07-25 11:31:34.297896] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid62321 for offline analysis/debug. 
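The fixture being brought up here is a bare spdk_tgt with the bdev tracepoint group enabled, plus a wait for its RPC socket before any sub-test runs. A minimal sketch of the same setup, assuming autotest_common.sh is sourced for waitforlisten and that rpc_cmd is the suite's thin wrapper around scripts/rpc.py talking to the default /var/tmp/spdk.sock:

    ./build/bin/spdk_tgt -e bdev &
    spdk_pid=$!
    waitforlisten "$spdk_pid"                     # polls until the RPC socket answers
    ./scripts/rpc.py bdev_get_bdevs | jq length   # 0 on a fresh target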
00:06:35.306 [2024-07-25 11:31:34.297959] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.241 11:31:35 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:36.241 11:31:35 rpc -- common/autotest_common.sh@864 -- # return 0 00:06:36.241 11:31:35 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:36.241 11:31:35 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:36.241 11:31:35 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:36.241 11:31:35 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:36.241 11:31:35 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:36.241 11:31:35 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:36.241 11:31:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.241 ************************************ 00:06:36.241 START TEST rpc_integrity 00:06:36.241 ************************************ 00:06:36.241 11:31:35 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:36.241 11:31:35 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:36.241 11:31:35 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.241 11:31:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:36.241 11:31:35 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.241 11:31:35 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:36.241 11:31:35 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:36.241 11:31:35 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:36.241 11:31:35 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:36.241 11:31:35 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.241 11:31:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:36.241 11:31:35 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.241 11:31:35 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:36.241 11:31:35 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:36.241 11:31:35 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.241 11:31:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:36.241 11:31:35 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.241 11:31:35 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:36.241 { 00:06:36.241 "name": "Malloc0", 00:06:36.241 "aliases": [ 00:06:36.241 "b95731ff-8074-42b1-b1f8-f9d3f40d06bc" 00:06:36.241 ], 00:06:36.241 "product_name": "Malloc disk", 00:06:36.241 "block_size": 512, 00:06:36.241 "num_blocks": 16384, 00:06:36.241 "uuid": "b95731ff-8074-42b1-b1f8-f9d3f40d06bc", 00:06:36.241 "assigned_rate_limits": { 00:06:36.241 "rw_ios_per_sec": 0, 00:06:36.241 "rw_mbytes_per_sec": 0, 00:06:36.241 "r_mbytes_per_sec": 0, 00:06:36.241 "w_mbytes_per_sec": 0 00:06:36.241 }, 00:06:36.241 "claimed": false, 00:06:36.241 "zoned": false, 00:06:36.241 "supported_io_types": { 00:06:36.241 "read": true, 00:06:36.241 "write": true, 00:06:36.241 "unmap": true, 00:06:36.241 "flush": true, 
00:06:36.241 "reset": true, 00:06:36.241 "nvme_admin": false, 00:06:36.241 "nvme_io": false, 00:06:36.241 "nvme_io_md": false, 00:06:36.241 "write_zeroes": true, 00:06:36.241 "zcopy": true, 00:06:36.241 "get_zone_info": false, 00:06:36.241 "zone_management": false, 00:06:36.241 "zone_append": false, 00:06:36.241 "compare": false, 00:06:36.241 "compare_and_write": false, 00:06:36.241 "abort": true, 00:06:36.241 "seek_hole": false, 00:06:36.241 "seek_data": false, 00:06:36.241 "copy": true, 00:06:36.241 "nvme_iov_md": false 00:06:36.241 }, 00:06:36.241 "memory_domains": [ 00:06:36.241 { 00:06:36.241 "dma_device_id": "system", 00:06:36.241 "dma_device_type": 1 00:06:36.241 }, 00:06:36.241 { 00:06:36.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:36.241 "dma_device_type": 2 00:06:36.241 } 00:06:36.241 ], 00:06:36.241 "driver_specific": {} 00:06:36.241 } 00:06:36.241 ]' 00:06:36.241 11:31:35 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:36.241 11:31:35 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:36.241 11:31:35 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:36.241 11:31:35 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.241 11:31:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:36.241 [2024-07-25 11:31:35.279155] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:36.241 [2024-07-25 11:31:35.279247] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:36.241 [2024-07-25 11:31:35.279294] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:06:36.241 [2024-07-25 11:31:35.279310] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:36.241 [2024-07-25 11:31:35.282362] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:36.241 [2024-07-25 11:31:35.282408] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:36.241 Passthru0 00:06:36.241 11:31:35 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.241 11:31:35 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:36.241 11:31:35 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.241 11:31:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:36.500 11:31:35 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.500 11:31:35 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:36.500 { 00:06:36.500 "name": "Malloc0", 00:06:36.500 "aliases": [ 00:06:36.500 "b95731ff-8074-42b1-b1f8-f9d3f40d06bc" 00:06:36.500 ], 00:06:36.500 "product_name": "Malloc disk", 00:06:36.500 "block_size": 512, 00:06:36.500 "num_blocks": 16384, 00:06:36.500 "uuid": "b95731ff-8074-42b1-b1f8-f9d3f40d06bc", 00:06:36.500 "assigned_rate_limits": { 00:06:36.500 "rw_ios_per_sec": 0, 00:06:36.500 "rw_mbytes_per_sec": 0, 00:06:36.500 "r_mbytes_per_sec": 0, 00:06:36.500 "w_mbytes_per_sec": 0 00:06:36.500 }, 00:06:36.500 "claimed": true, 00:06:36.500 "claim_type": "exclusive_write", 00:06:36.500 "zoned": false, 00:06:36.500 "supported_io_types": { 00:06:36.500 "read": true, 00:06:36.500 "write": true, 00:06:36.500 "unmap": true, 00:06:36.500 "flush": true, 00:06:36.500 "reset": true, 00:06:36.500 "nvme_admin": false, 00:06:36.500 "nvme_io": false, 00:06:36.500 "nvme_io_md": false, 00:06:36.500 "write_zeroes": true, 00:06:36.500 "zcopy": true, 
00:06:36.500 "get_zone_info": false, 00:06:36.500 "zone_management": false, 00:06:36.500 "zone_append": false, 00:06:36.500 "compare": false, 00:06:36.500 "compare_and_write": false, 00:06:36.500 "abort": true, 00:06:36.500 "seek_hole": false, 00:06:36.500 "seek_data": false, 00:06:36.500 "copy": true, 00:06:36.500 "nvme_iov_md": false 00:06:36.500 }, 00:06:36.500 "memory_domains": [ 00:06:36.500 { 00:06:36.500 "dma_device_id": "system", 00:06:36.500 "dma_device_type": 1 00:06:36.500 }, 00:06:36.500 { 00:06:36.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:36.500 "dma_device_type": 2 00:06:36.500 } 00:06:36.500 ], 00:06:36.500 "driver_specific": {} 00:06:36.500 }, 00:06:36.500 { 00:06:36.500 "name": "Passthru0", 00:06:36.500 "aliases": [ 00:06:36.500 "d1cc8708-a455-52fd-87b5-793d21c7f5c4" 00:06:36.500 ], 00:06:36.500 "product_name": "passthru", 00:06:36.500 "block_size": 512, 00:06:36.500 "num_blocks": 16384, 00:06:36.500 "uuid": "d1cc8708-a455-52fd-87b5-793d21c7f5c4", 00:06:36.500 "assigned_rate_limits": { 00:06:36.500 "rw_ios_per_sec": 0, 00:06:36.500 "rw_mbytes_per_sec": 0, 00:06:36.500 "r_mbytes_per_sec": 0, 00:06:36.500 "w_mbytes_per_sec": 0 00:06:36.500 }, 00:06:36.500 "claimed": false, 00:06:36.500 "zoned": false, 00:06:36.500 "supported_io_types": { 00:06:36.500 "read": true, 00:06:36.500 "write": true, 00:06:36.500 "unmap": true, 00:06:36.500 "flush": true, 00:06:36.500 "reset": true, 00:06:36.500 "nvme_admin": false, 00:06:36.500 "nvme_io": false, 00:06:36.500 "nvme_io_md": false, 00:06:36.500 "write_zeroes": true, 00:06:36.500 "zcopy": true, 00:06:36.500 "get_zone_info": false, 00:06:36.500 "zone_management": false, 00:06:36.500 "zone_append": false, 00:06:36.500 "compare": false, 00:06:36.500 "compare_and_write": false, 00:06:36.500 "abort": true, 00:06:36.500 "seek_hole": false, 00:06:36.500 "seek_data": false, 00:06:36.500 "copy": true, 00:06:36.500 "nvme_iov_md": false 00:06:36.500 }, 00:06:36.500 "memory_domains": [ 00:06:36.500 { 00:06:36.500 "dma_device_id": "system", 00:06:36.500 "dma_device_type": 1 00:06:36.500 }, 00:06:36.500 { 00:06:36.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:36.500 "dma_device_type": 2 00:06:36.500 } 00:06:36.500 ], 00:06:36.500 "driver_specific": { 00:06:36.500 "passthru": { 00:06:36.500 "name": "Passthru0", 00:06:36.500 "base_bdev_name": "Malloc0" 00:06:36.500 } 00:06:36.500 } 00:06:36.500 } 00:06:36.500 ]' 00:06:36.500 11:31:35 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:36.500 11:31:35 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:36.500 11:31:35 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:36.500 11:31:35 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.500 11:31:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:36.500 11:31:35 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.500 11:31:35 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:36.500 11:31:35 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.500 11:31:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:36.500 11:31:35 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.500 11:31:35 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:36.500 11:31:35 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.500 11:31:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:06:36.500 11:31:35 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.500 11:31:35 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:36.500 11:31:35 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:36.500 11:31:35 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:36.500 00:06:36.500 real 0m0.335s 00:06:36.500 user 0m0.204s 00:06:36.500 sys 0m0.036s 00:06:36.500 11:31:35 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:36.500 11:31:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:36.500 ************************************ 00:06:36.500 END TEST rpc_integrity 00:06:36.500 ************************************ 00:06:36.500 11:31:35 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:36.500 11:31:35 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:36.500 11:31:35 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:36.500 11:31:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.500 ************************************ 00:06:36.500 START TEST rpc_plugins 00:06:36.500 ************************************ 00:06:36.500 11:31:35 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:06:36.500 11:31:35 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:36.500 11:31:35 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.501 11:31:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:36.501 11:31:35 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.501 11:31:35 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:36.501 11:31:35 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:36.501 11:31:35 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.501 11:31:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:36.501 11:31:35 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.501 11:31:35 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:36.501 { 00:06:36.501 "name": "Malloc1", 00:06:36.501 "aliases": [ 00:06:36.501 "eb8afd01-a5d3-44ff-be43-521ee3de4def" 00:06:36.501 ], 00:06:36.501 "product_name": "Malloc disk", 00:06:36.501 "block_size": 4096, 00:06:36.501 "num_blocks": 256, 00:06:36.501 "uuid": "eb8afd01-a5d3-44ff-be43-521ee3de4def", 00:06:36.501 "assigned_rate_limits": { 00:06:36.501 "rw_ios_per_sec": 0, 00:06:36.501 "rw_mbytes_per_sec": 0, 00:06:36.501 "r_mbytes_per_sec": 0, 00:06:36.501 "w_mbytes_per_sec": 0 00:06:36.501 }, 00:06:36.501 "claimed": false, 00:06:36.501 "zoned": false, 00:06:36.501 "supported_io_types": { 00:06:36.501 "read": true, 00:06:36.501 "write": true, 00:06:36.501 "unmap": true, 00:06:36.501 "flush": true, 00:06:36.501 "reset": true, 00:06:36.501 "nvme_admin": false, 00:06:36.501 "nvme_io": false, 00:06:36.501 "nvme_io_md": false, 00:06:36.501 "write_zeroes": true, 00:06:36.501 "zcopy": true, 00:06:36.501 "get_zone_info": false, 00:06:36.501 "zone_management": false, 00:06:36.501 "zone_append": false, 00:06:36.501 "compare": false, 00:06:36.501 "compare_and_write": false, 00:06:36.501 "abort": true, 00:06:36.501 "seek_hole": false, 00:06:36.501 "seek_data": false, 00:06:36.501 "copy": true, 00:06:36.501 "nvme_iov_md": false 00:06:36.501 }, 00:06:36.501 "memory_domains": [ 00:06:36.501 { 00:06:36.501 "dma_device_id": "system", 00:06:36.501 "dma_device_type": 1 00:06:36.501 }, 00:06:36.501 { 00:06:36.501 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:06:36.501 "dma_device_type": 2 00:06:36.501 } 00:06:36.501 ], 00:06:36.501 "driver_specific": {} 00:06:36.501 } 00:06:36.501 ]' 00:06:36.501 11:31:35 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:36.759 11:31:35 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:36.759 11:31:35 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:36.759 11:31:35 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.759 11:31:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:36.759 11:31:35 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.759 11:31:35 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:36.759 11:31:35 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.759 11:31:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:36.759 11:31:35 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.759 11:31:35 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:36.759 11:31:35 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:36.759 11:31:35 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:36.759 00:06:36.759 real 0m0.164s 00:06:36.759 user 0m0.098s 00:06:36.759 sys 0m0.021s 00:06:36.759 11:31:35 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:36.759 11:31:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:36.759 ************************************ 00:06:36.759 END TEST rpc_plugins 00:06:36.759 ************************************ 00:06:36.759 11:31:35 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:36.759 11:31:35 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:36.759 11:31:35 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:36.759 11:31:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.759 ************************************ 00:06:36.759 START TEST rpc_trace_cmd_test 00:06:36.759 ************************************ 00:06:36.759 11:31:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:06:36.759 11:31:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:36.759 11:31:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:36.759 11:31:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.759 11:31:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:36.759 11:31:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.759 11:31:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:36.759 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid62321", 00:06:36.759 "tpoint_group_mask": "0x8", 00:06:36.759 "iscsi_conn": { 00:06:36.759 "mask": "0x2", 00:06:36.759 "tpoint_mask": "0x0" 00:06:36.759 }, 00:06:36.759 "scsi": { 00:06:36.759 "mask": "0x4", 00:06:36.759 "tpoint_mask": "0x0" 00:06:36.759 }, 00:06:36.759 "bdev": { 00:06:36.759 "mask": "0x8", 00:06:36.759 "tpoint_mask": "0xffffffffffffffff" 00:06:36.759 }, 00:06:36.759 "nvmf_rdma": { 00:06:36.759 "mask": "0x10", 00:06:36.759 "tpoint_mask": "0x0" 00:06:36.759 }, 00:06:36.759 "nvmf_tcp": { 00:06:36.759 "mask": "0x20", 00:06:36.759 "tpoint_mask": "0x0" 00:06:36.759 }, 00:06:36.759 "ftl": { 00:06:36.759 "mask": "0x40", 00:06:36.759 "tpoint_mask": "0x0" 00:06:36.759 }, 00:06:36.759 "blobfs": { 00:06:36.759 "mask": "0x80", 00:06:36.759 
"tpoint_mask": "0x0" 00:06:36.759 }, 00:06:36.759 "dsa": { 00:06:36.759 "mask": "0x200", 00:06:36.759 "tpoint_mask": "0x0" 00:06:36.759 }, 00:06:36.759 "thread": { 00:06:36.759 "mask": "0x400", 00:06:36.759 "tpoint_mask": "0x0" 00:06:36.759 }, 00:06:36.759 "nvme_pcie": { 00:06:36.759 "mask": "0x800", 00:06:36.759 "tpoint_mask": "0x0" 00:06:36.759 }, 00:06:36.759 "iaa": { 00:06:36.759 "mask": "0x1000", 00:06:36.759 "tpoint_mask": "0x0" 00:06:36.759 }, 00:06:36.759 "nvme_tcp": { 00:06:36.759 "mask": "0x2000", 00:06:36.759 "tpoint_mask": "0x0" 00:06:36.759 }, 00:06:36.759 "bdev_nvme": { 00:06:36.759 "mask": "0x4000", 00:06:36.759 "tpoint_mask": "0x0" 00:06:36.759 }, 00:06:36.759 "sock": { 00:06:36.759 "mask": "0x8000", 00:06:36.759 "tpoint_mask": "0x0" 00:06:36.759 } 00:06:36.759 }' 00:06:36.759 11:31:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:36.759 11:31:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:06:36.759 11:31:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:37.016 11:31:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:37.016 11:31:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:37.016 11:31:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:37.016 11:31:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:37.016 11:31:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:37.016 11:31:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:37.016 11:31:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:37.016 00:06:37.016 real 0m0.255s 00:06:37.016 user 0m0.221s 00:06:37.016 sys 0m0.022s 00:06:37.016 11:31:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:37.016 ************************************ 00:06:37.016 END TEST rpc_trace_cmd_test 00:06:37.016 ************************************ 00:06:37.016 11:31:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.016 11:31:36 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:37.016 11:31:36 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:37.016 11:31:36 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:37.016 11:31:36 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:37.016 11:31:36 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.016 11:31:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.016 ************************************ 00:06:37.016 START TEST rpc_daemon_integrity 00:06:37.016 ************************************ 00:06:37.016 11:31:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:37.016 11:31:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:37.016 11:31:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.016 11:31:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:37.016 11:31:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.016 11:31:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:37.016 11:31:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:37.274 11:31:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:37.274 11:31:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:37.274 11:31:36 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.274 11:31:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:37.274 11:31:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.274 11:31:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:37.274 11:31:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:37.274 11:31:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.274 11:31:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:37.274 11:31:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.274 11:31:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:37.274 { 00:06:37.274 "name": "Malloc2", 00:06:37.274 "aliases": [ 00:06:37.274 "eb6827c9-09c5-460b-8baf-59ede1e2e9a1" 00:06:37.274 ], 00:06:37.274 "product_name": "Malloc disk", 00:06:37.274 "block_size": 512, 00:06:37.274 "num_blocks": 16384, 00:06:37.274 "uuid": "eb6827c9-09c5-460b-8baf-59ede1e2e9a1", 00:06:37.274 "assigned_rate_limits": { 00:06:37.274 "rw_ios_per_sec": 0, 00:06:37.274 "rw_mbytes_per_sec": 0, 00:06:37.274 "r_mbytes_per_sec": 0, 00:06:37.274 "w_mbytes_per_sec": 0 00:06:37.274 }, 00:06:37.274 "claimed": false, 00:06:37.274 "zoned": false, 00:06:37.274 "supported_io_types": { 00:06:37.274 "read": true, 00:06:37.274 "write": true, 00:06:37.274 "unmap": true, 00:06:37.274 "flush": true, 00:06:37.274 "reset": true, 00:06:37.274 "nvme_admin": false, 00:06:37.274 "nvme_io": false, 00:06:37.274 "nvme_io_md": false, 00:06:37.274 "write_zeroes": true, 00:06:37.274 "zcopy": true, 00:06:37.274 "get_zone_info": false, 00:06:37.274 "zone_management": false, 00:06:37.274 "zone_append": false, 00:06:37.274 "compare": false, 00:06:37.274 "compare_and_write": false, 00:06:37.274 "abort": true, 00:06:37.274 "seek_hole": false, 00:06:37.274 "seek_data": false, 00:06:37.274 "copy": true, 00:06:37.274 "nvme_iov_md": false 00:06:37.274 }, 00:06:37.274 "memory_domains": [ 00:06:37.274 { 00:06:37.274 "dma_device_id": "system", 00:06:37.274 "dma_device_type": 1 00:06:37.274 }, 00:06:37.274 { 00:06:37.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:37.274 "dma_device_type": 2 00:06:37.274 } 00:06:37.274 ], 00:06:37.274 "driver_specific": {} 00:06:37.274 } 00:06:37.274 ]' 00:06:37.274 11:31:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:37.274 11:31:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:37.274 11:31:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:37.274 11:31:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.274 11:31:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:37.274 [2024-07-25 11:31:36.173147] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:37.274 [2024-07-25 11:31:36.173241] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:37.274 [2024-07-25 11:31:36.173283] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:06:37.274 [2024-07-25 11:31:36.173299] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:37.274 [2024-07-25 11:31:36.176377] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:37.274 [2024-07-25 11:31:36.176423] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:37.274 Passthru0 00:06:37.274 11:31:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.274 11:31:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:37.274 11:31:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.274 11:31:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:37.274 11:31:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.274 11:31:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:37.274 { 00:06:37.274 "name": "Malloc2", 00:06:37.274 "aliases": [ 00:06:37.274 "eb6827c9-09c5-460b-8baf-59ede1e2e9a1" 00:06:37.274 ], 00:06:37.274 "product_name": "Malloc disk", 00:06:37.274 "block_size": 512, 00:06:37.274 "num_blocks": 16384, 00:06:37.274 "uuid": "eb6827c9-09c5-460b-8baf-59ede1e2e9a1", 00:06:37.274 "assigned_rate_limits": { 00:06:37.274 "rw_ios_per_sec": 0, 00:06:37.274 "rw_mbytes_per_sec": 0, 00:06:37.274 "r_mbytes_per_sec": 0, 00:06:37.274 "w_mbytes_per_sec": 0 00:06:37.274 }, 00:06:37.274 "claimed": true, 00:06:37.274 "claim_type": "exclusive_write", 00:06:37.274 "zoned": false, 00:06:37.274 "supported_io_types": { 00:06:37.274 "read": true, 00:06:37.274 "write": true, 00:06:37.274 "unmap": true, 00:06:37.274 "flush": true, 00:06:37.274 "reset": true, 00:06:37.274 "nvme_admin": false, 00:06:37.274 "nvme_io": false, 00:06:37.274 "nvme_io_md": false, 00:06:37.274 "write_zeroes": true, 00:06:37.274 "zcopy": true, 00:06:37.274 "get_zone_info": false, 00:06:37.274 "zone_management": false, 00:06:37.274 "zone_append": false, 00:06:37.274 "compare": false, 00:06:37.274 "compare_and_write": false, 00:06:37.274 "abort": true, 00:06:37.274 "seek_hole": false, 00:06:37.274 "seek_data": false, 00:06:37.274 "copy": true, 00:06:37.274 "nvme_iov_md": false 00:06:37.274 }, 00:06:37.274 "memory_domains": [ 00:06:37.274 { 00:06:37.274 "dma_device_id": "system", 00:06:37.274 "dma_device_type": 1 00:06:37.274 }, 00:06:37.274 { 00:06:37.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:37.274 "dma_device_type": 2 00:06:37.274 } 00:06:37.274 ], 00:06:37.274 "driver_specific": {} 00:06:37.274 }, 00:06:37.274 { 00:06:37.274 "name": "Passthru0", 00:06:37.274 "aliases": [ 00:06:37.274 "77bfa83e-e21f-5741-9420-7248d2868930" 00:06:37.274 ], 00:06:37.274 "product_name": "passthru", 00:06:37.274 "block_size": 512, 00:06:37.274 "num_blocks": 16384, 00:06:37.274 "uuid": "77bfa83e-e21f-5741-9420-7248d2868930", 00:06:37.274 "assigned_rate_limits": { 00:06:37.274 "rw_ios_per_sec": 0, 00:06:37.274 "rw_mbytes_per_sec": 0, 00:06:37.274 "r_mbytes_per_sec": 0, 00:06:37.274 "w_mbytes_per_sec": 0 00:06:37.274 }, 00:06:37.274 "claimed": false, 00:06:37.274 "zoned": false, 00:06:37.274 "supported_io_types": { 00:06:37.274 "read": true, 00:06:37.274 "write": true, 00:06:37.274 "unmap": true, 00:06:37.274 "flush": true, 00:06:37.274 "reset": true, 00:06:37.274 "nvme_admin": false, 00:06:37.274 "nvme_io": false, 00:06:37.274 "nvme_io_md": false, 00:06:37.274 "write_zeroes": true, 00:06:37.274 "zcopy": true, 00:06:37.274 "get_zone_info": false, 00:06:37.274 "zone_management": false, 00:06:37.274 "zone_append": false, 00:06:37.274 "compare": false, 00:06:37.274 "compare_and_write": false, 00:06:37.274 "abort": true, 00:06:37.274 "seek_hole": false, 00:06:37.274 "seek_data": false, 00:06:37.274 "copy": true, 00:06:37.274 "nvme_iov_md": false 00:06:37.274 }, 00:06:37.274 
"memory_domains": [ 00:06:37.274 { 00:06:37.274 "dma_device_id": "system", 00:06:37.274 "dma_device_type": 1 00:06:37.274 }, 00:06:37.274 { 00:06:37.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:37.275 "dma_device_type": 2 00:06:37.275 } 00:06:37.275 ], 00:06:37.275 "driver_specific": { 00:06:37.275 "passthru": { 00:06:37.275 "name": "Passthru0", 00:06:37.275 "base_bdev_name": "Malloc2" 00:06:37.275 } 00:06:37.275 } 00:06:37.275 } 00:06:37.275 ]' 00:06:37.275 11:31:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:37.275 11:31:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:37.275 11:31:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:37.275 11:31:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.275 11:31:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:37.275 11:31:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.275 11:31:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:37.275 11:31:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.275 11:31:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:37.275 11:31:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.275 11:31:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:37.275 11:31:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.275 11:31:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:37.532 11:31:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.532 11:31:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:37.532 11:31:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:37.532 11:31:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:37.532 00:06:37.532 real 0m0.361s 00:06:37.532 user 0m0.228s 00:06:37.532 sys 0m0.035s 00:06:37.532 11:31:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:37.532 ************************************ 00:06:37.532 END TEST rpc_daemon_integrity 00:06:37.532 ************************************ 00:06:37.532 11:31:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:37.532 11:31:36 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:37.532 11:31:36 rpc -- rpc/rpc.sh@84 -- # killprocess 62321 00:06:37.532 11:31:36 rpc -- common/autotest_common.sh@950 -- # '[' -z 62321 ']' 00:06:37.532 11:31:36 rpc -- common/autotest_common.sh@954 -- # kill -0 62321 00:06:37.532 11:31:36 rpc -- common/autotest_common.sh@955 -- # uname 00:06:37.532 11:31:36 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:37.532 11:31:36 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62321 00:06:37.532 11:31:36 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:37.532 killing process with pid 62321 00:06:37.532 11:31:36 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:37.532 11:31:36 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62321' 00:06:37.532 11:31:36 rpc -- common/autotest_common.sh@969 -- # kill 62321 00:06:37.532 11:31:36 rpc -- common/autotest_common.sh@974 -- # wait 62321 00:06:40.060 00:06:40.060 real 0m5.173s 00:06:40.060 user 0m5.788s 
00:06:40.060 sys 0m0.847s 00:06:40.060 11:31:38 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:40.060 11:31:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.060 ************************************ 00:06:40.060 END TEST rpc 00:06:40.060 ************************************ 00:06:40.060 11:31:38 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:40.060 11:31:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:40.060 11:31:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:40.060 11:31:38 -- common/autotest_common.sh@10 -- # set +x 00:06:40.060 ************************************ 00:06:40.060 START TEST skip_rpc 00:06:40.060 ************************************ 00:06:40.060 11:31:38 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:40.060 * Looking for test storage... 00:06:40.060 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:40.060 11:31:38 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:40.060 11:31:38 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:40.060 11:31:38 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:40.060 11:31:38 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:40.060 11:31:38 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:40.060 11:31:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.060 ************************************ 00:06:40.060 START TEST skip_rpc 00:06:40.060 ************************************ 00:06:40.060 11:31:38 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:06:40.060 11:31:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=62542 00:06:40.060 11:31:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:40.060 11:31:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:40.060 11:31:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:40.060 [2024-07-25 11:31:39.090618] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:40.060 [2024-07-25 11:31:39.091638] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62542 ] 00:06:40.318 [2024-07-25 11:31:39.274828] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.576 [2024-07-25 11:31:39.571530] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.861 11:31:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:45.861 11:31:43 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:45.861 11:31:43 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:45.861 11:31:43 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:45.861 11:31:43 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:45.861 11:31:43 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:45.861 11:31:43 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:45.861 11:31:43 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:06:45.861 11:31:43 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.861 11:31:43 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.861 11:31:43 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:45.861 11:31:43 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:45.861 11:31:43 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:45.861 11:31:43 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:45.861 11:31:43 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:45.861 11:31:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:45.861 11:31:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 62542 00:06:45.861 11:31:43 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 62542 ']' 00:06:45.861 11:31:43 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 62542 00:06:45.861 11:31:43 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:06:45.861 11:31:43 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:45.861 11:31:43 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62542 00:06:45.861 11:31:43 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:45.861 11:31:43 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:45.861 11:31:43 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62542' 00:06:45.861 killing process with pid 62542 00:06:45.861 11:31:43 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 62542 00:06:45.861 11:31:43 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 62542 00:06:47.762 00:06:47.762 real 0m7.347s 00:06:47.762 user 0m6.760s 00:06:47.762 sys 0m0.475s 00:06:47.762 11:31:46 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:47.762 11:31:46 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.762 ************************************ 00:06:47.762 END TEST skip_rpc 00:06:47.762 
************************************ 00:06:47.762 11:31:46 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:47.762 11:31:46 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:47.762 11:31:46 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:47.762 11:31:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.762 ************************************ 00:06:47.762 START TEST skip_rpc_with_json 00:06:47.762 ************************************ 00:06:47.762 11:31:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:06:47.762 11:31:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:47.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.762 11:31:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=62646 00:06:47.762 11:31:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:47.762 11:31:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 62646 00:06:47.762 11:31:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:47.762 11:31:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 62646 ']' 00:06:47.762 11:31:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.762 11:31:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:47.762 11:31:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.762 11:31:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:47.762 11:31:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:47.762 [2024-07-25 11:31:46.461073] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:47.762 [2024-07-25 11:31:46.461244] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62646 ] 00:06:47.762 [2024-07-25 11:31:46.633075] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.020 [2024-07-25 11:31:46.921031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.008 11:31:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:49.008 11:31:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:06:49.008 11:31:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:49.008 11:31:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.008 11:31:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:49.008 [2024-07-25 11:31:47.727430] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:49.008 request: 00:06:49.008 { 00:06:49.008 "trtype": "tcp", 00:06:49.008 "method": "nvmf_get_transports", 00:06:49.008 "req_id": 1 00:06:49.008 } 00:06:49.008 Got JSON-RPC error response 00:06:49.008 response: 00:06:49.008 { 00:06:49.008 "code": -19, 00:06:49.008 "message": "No such device" 00:06:49.008 } 00:06:49.008 11:31:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:49.008 11:31:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:49.008 11:31:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.008 11:31:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:49.008 [2024-07-25 11:31:47.739546] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:49.008 11:31:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.008 11:31:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:49.008 11:31:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.008 11:31:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:49.008 11:31:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.008 11:31:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:49.008 { 00:06:49.008 "subsystems": [ 00:06:49.008 { 00:06:49.008 "subsystem": "keyring", 00:06:49.008 "config": [] 00:06:49.008 }, 00:06:49.008 { 00:06:49.008 "subsystem": "iobuf", 00:06:49.008 "config": [ 00:06:49.008 { 00:06:49.008 "method": "iobuf_set_options", 00:06:49.008 "params": { 00:06:49.008 "small_pool_count": 8192, 00:06:49.008 "large_pool_count": 1024, 00:06:49.008 "small_bufsize": 8192, 00:06:49.008 "large_bufsize": 135168 00:06:49.008 } 00:06:49.008 } 00:06:49.008 ] 00:06:49.008 }, 00:06:49.008 { 00:06:49.008 "subsystem": "sock", 00:06:49.008 "config": [ 00:06:49.008 { 00:06:49.008 "method": "sock_set_default_impl", 00:06:49.008 "params": { 00:06:49.008 "impl_name": "posix" 00:06:49.008 } 00:06:49.008 }, 00:06:49.008 { 00:06:49.008 "method": "sock_impl_set_options", 00:06:49.008 "params": { 00:06:49.008 "impl_name": "ssl", 00:06:49.008 "recv_buf_size": 4096, 00:06:49.008 "send_buf_size": 4096, 
00:06:49.008 "enable_recv_pipe": true, 00:06:49.008 "enable_quickack": false, 00:06:49.008 "enable_placement_id": 0, 00:06:49.008 "enable_zerocopy_send_server": true, 00:06:49.008 "enable_zerocopy_send_client": false, 00:06:49.008 "zerocopy_threshold": 0, 00:06:49.008 "tls_version": 0, 00:06:49.008 "enable_ktls": false 00:06:49.008 } 00:06:49.008 }, 00:06:49.008 { 00:06:49.008 "method": "sock_impl_set_options", 00:06:49.008 "params": { 00:06:49.008 "impl_name": "posix", 00:06:49.008 "recv_buf_size": 2097152, 00:06:49.008 "send_buf_size": 2097152, 00:06:49.008 "enable_recv_pipe": true, 00:06:49.008 "enable_quickack": false, 00:06:49.008 "enable_placement_id": 0, 00:06:49.008 "enable_zerocopy_send_server": true, 00:06:49.008 "enable_zerocopy_send_client": false, 00:06:49.008 "zerocopy_threshold": 0, 00:06:49.008 "tls_version": 0, 00:06:49.008 "enable_ktls": false 00:06:49.008 } 00:06:49.008 } 00:06:49.008 ] 00:06:49.008 }, 00:06:49.008 { 00:06:49.008 "subsystem": "vmd", 00:06:49.008 "config": [] 00:06:49.008 }, 00:06:49.008 { 00:06:49.008 "subsystem": "accel", 00:06:49.008 "config": [ 00:06:49.008 { 00:06:49.008 "method": "accel_set_options", 00:06:49.008 "params": { 00:06:49.008 "small_cache_size": 128, 00:06:49.008 "large_cache_size": 16, 00:06:49.008 "task_count": 2048, 00:06:49.008 "sequence_count": 2048, 00:06:49.008 "buf_count": 2048 00:06:49.008 } 00:06:49.008 } 00:06:49.008 ] 00:06:49.008 }, 00:06:49.008 { 00:06:49.008 "subsystem": "bdev", 00:06:49.008 "config": [ 00:06:49.008 { 00:06:49.008 "method": "bdev_set_options", 00:06:49.008 "params": { 00:06:49.008 "bdev_io_pool_size": 65535, 00:06:49.008 "bdev_io_cache_size": 256, 00:06:49.008 "bdev_auto_examine": true, 00:06:49.008 "iobuf_small_cache_size": 128, 00:06:49.008 "iobuf_large_cache_size": 16 00:06:49.008 } 00:06:49.008 }, 00:06:49.008 { 00:06:49.008 "method": "bdev_raid_set_options", 00:06:49.008 "params": { 00:06:49.008 "process_window_size_kb": 1024, 00:06:49.008 "process_max_bandwidth_mb_sec": 0 00:06:49.008 } 00:06:49.008 }, 00:06:49.008 { 00:06:49.008 "method": "bdev_iscsi_set_options", 00:06:49.008 "params": { 00:06:49.008 "timeout_sec": 30 00:06:49.008 } 00:06:49.008 }, 00:06:49.008 { 00:06:49.008 "method": "bdev_nvme_set_options", 00:06:49.008 "params": { 00:06:49.008 "action_on_timeout": "none", 00:06:49.008 "timeout_us": 0, 00:06:49.008 "timeout_admin_us": 0, 00:06:49.008 "keep_alive_timeout_ms": 10000, 00:06:49.008 "arbitration_burst": 0, 00:06:49.008 "low_priority_weight": 0, 00:06:49.008 "medium_priority_weight": 0, 00:06:49.008 "high_priority_weight": 0, 00:06:49.008 "nvme_adminq_poll_period_us": 10000, 00:06:49.008 "nvme_ioq_poll_period_us": 0, 00:06:49.008 "io_queue_requests": 0, 00:06:49.008 "delay_cmd_submit": true, 00:06:49.008 "transport_retry_count": 4, 00:06:49.008 "bdev_retry_count": 3, 00:06:49.008 "transport_ack_timeout": 0, 00:06:49.008 "ctrlr_loss_timeout_sec": 0, 00:06:49.008 "reconnect_delay_sec": 0, 00:06:49.008 "fast_io_fail_timeout_sec": 0, 00:06:49.008 "disable_auto_failback": false, 00:06:49.008 "generate_uuids": false, 00:06:49.008 "transport_tos": 0, 00:06:49.008 "nvme_error_stat": false, 00:06:49.008 "rdma_srq_size": 0, 00:06:49.008 "io_path_stat": false, 00:06:49.008 "allow_accel_sequence": false, 00:06:49.008 "rdma_max_cq_size": 0, 00:06:49.008 "rdma_cm_event_timeout_ms": 0, 00:06:49.008 "dhchap_digests": [ 00:06:49.008 "sha256", 00:06:49.008 "sha384", 00:06:49.008 "sha512" 00:06:49.008 ], 00:06:49.008 "dhchap_dhgroups": [ 00:06:49.008 "null", 00:06:49.008 "ffdhe2048", 00:06:49.008 
"ffdhe3072", 00:06:49.008 "ffdhe4096", 00:06:49.008 "ffdhe6144", 00:06:49.008 "ffdhe8192" 00:06:49.008 ] 00:06:49.008 } 00:06:49.008 }, 00:06:49.008 { 00:06:49.008 "method": "bdev_nvme_set_hotplug", 00:06:49.008 "params": { 00:06:49.008 "period_us": 100000, 00:06:49.008 "enable": false 00:06:49.008 } 00:06:49.008 }, 00:06:49.008 { 00:06:49.008 "method": "bdev_wait_for_examine" 00:06:49.008 } 00:06:49.008 ] 00:06:49.008 }, 00:06:49.008 { 00:06:49.008 "subsystem": "scsi", 00:06:49.008 "config": null 00:06:49.008 }, 00:06:49.008 { 00:06:49.008 "subsystem": "scheduler", 00:06:49.008 "config": [ 00:06:49.008 { 00:06:49.008 "method": "framework_set_scheduler", 00:06:49.008 "params": { 00:06:49.008 "name": "static" 00:06:49.008 } 00:06:49.008 } 00:06:49.008 ] 00:06:49.008 }, 00:06:49.008 { 00:06:49.008 "subsystem": "vhost_scsi", 00:06:49.008 "config": [] 00:06:49.008 }, 00:06:49.008 { 00:06:49.008 "subsystem": "vhost_blk", 00:06:49.008 "config": [] 00:06:49.008 }, 00:06:49.008 { 00:06:49.008 "subsystem": "ublk", 00:06:49.008 "config": [] 00:06:49.008 }, 00:06:49.008 { 00:06:49.008 "subsystem": "nbd", 00:06:49.008 "config": [] 00:06:49.008 }, 00:06:49.008 { 00:06:49.008 "subsystem": "nvmf", 00:06:49.008 "config": [ 00:06:49.008 { 00:06:49.008 "method": "nvmf_set_config", 00:06:49.008 "params": { 00:06:49.008 "discovery_filter": "match_any", 00:06:49.008 "admin_cmd_passthru": { 00:06:49.008 "identify_ctrlr": false 00:06:49.008 } 00:06:49.008 } 00:06:49.008 }, 00:06:49.008 { 00:06:49.008 "method": "nvmf_set_max_subsystems", 00:06:49.008 "params": { 00:06:49.008 "max_subsystems": 1024 00:06:49.008 } 00:06:49.008 }, 00:06:49.008 { 00:06:49.008 "method": "nvmf_set_crdt", 00:06:49.008 "params": { 00:06:49.008 "crdt1": 0, 00:06:49.008 "crdt2": 0, 00:06:49.008 "crdt3": 0 00:06:49.008 } 00:06:49.008 }, 00:06:49.008 { 00:06:49.009 "method": "nvmf_create_transport", 00:06:49.009 "params": { 00:06:49.009 "trtype": "TCP", 00:06:49.009 "max_queue_depth": 128, 00:06:49.009 "max_io_qpairs_per_ctrlr": 127, 00:06:49.009 "in_capsule_data_size": 4096, 00:06:49.009 "max_io_size": 131072, 00:06:49.009 "io_unit_size": 131072, 00:06:49.009 "max_aq_depth": 128, 00:06:49.009 "num_shared_buffers": 511, 00:06:49.009 "buf_cache_size": 4294967295, 00:06:49.009 "dif_insert_or_strip": false, 00:06:49.009 "zcopy": false, 00:06:49.009 "c2h_success": true, 00:06:49.009 "sock_priority": 0, 00:06:49.009 "abort_timeout_sec": 1, 00:06:49.009 "ack_timeout": 0, 00:06:49.009 "data_wr_pool_size": 0 00:06:49.009 } 00:06:49.009 } 00:06:49.009 ] 00:06:49.009 }, 00:06:49.009 { 00:06:49.009 "subsystem": "iscsi", 00:06:49.009 "config": [ 00:06:49.009 { 00:06:49.009 "method": "iscsi_set_options", 00:06:49.009 "params": { 00:06:49.009 "node_base": "iqn.2016-06.io.spdk", 00:06:49.009 "max_sessions": 128, 00:06:49.009 "max_connections_per_session": 2, 00:06:49.009 "max_queue_depth": 64, 00:06:49.009 "default_time2wait": 2, 00:06:49.009 "default_time2retain": 20, 00:06:49.009 "first_burst_length": 8192, 00:06:49.009 "immediate_data": true, 00:06:49.009 "allow_duplicated_isid": false, 00:06:49.009 "error_recovery_level": 0, 00:06:49.009 "nop_timeout": 60, 00:06:49.009 "nop_in_interval": 30, 00:06:49.009 "disable_chap": false, 00:06:49.009 "require_chap": false, 00:06:49.009 "mutual_chap": false, 00:06:49.009 "chap_group": 0, 00:06:49.009 "max_large_datain_per_connection": 64, 00:06:49.009 "max_r2t_per_connection": 4, 00:06:49.009 "pdu_pool_size": 36864, 00:06:49.009 "immediate_data_pool_size": 16384, 00:06:49.009 "data_out_pool_size": 2048 
00:06:49.009 } 00:06:49.009 } 00:06:49.009 ] 00:06:49.009 } 00:06:49.009 ] 00:06:49.009 } 00:06:49.009 11:31:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:49.009 11:31:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 62646 00:06:49.009 11:31:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 62646 ']' 00:06:49.009 11:31:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 62646 00:06:49.009 11:31:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:49.009 11:31:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:49.009 11:31:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62646 00:06:49.009 killing process with pid 62646 00:06:49.009 11:31:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:49.009 11:31:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:49.009 11:31:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62646' 00:06:49.009 11:31:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 62646 00:06:49.009 11:31:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 62646 00:06:51.536 11:31:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=62702 00:06:51.536 11:31:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:51.536 11:31:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:56.845 11:31:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 62702 00:06:56.845 11:31:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 62702 ']' 00:06:56.845 11:31:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 62702 00:06:56.845 11:31:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:56.845 11:31:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:56.845 11:31:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62702 00:06:56.845 killing process with pid 62702 00:06:56.845 11:31:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:56.845 11:31:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:56.845 11:31:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62702' 00:06:56.845 11:31:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 62702 00:06:56.845 11:31:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 62702 00:06:58.844 11:31:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:58.844 11:31:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:58.844 ************************************ 00:06:58.844 END TEST skip_rpc_with_json 00:06:58.844 ************************************ 00:06:58.844 00:06:58.844 real 0m11.155s 00:06:58.844 user 0m10.599s 00:06:58.844 sys 0m0.991s 00:06:58.844 11:31:57 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:58.844 11:31:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:58.844 11:31:57 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:58.844 11:31:57 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:58.844 11:31:57 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:58.844 11:31:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.844 ************************************ 00:06:58.844 START TEST skip_rpc_with_delay 00:06:58.844 ************************************ 00:06:58.844 11:31:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:06:58.844 11:31:57 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:58.844 11:31:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:06:58.844 11:31:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:58.844 11:31:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:58.844 11:31:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.844 11:31:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:58.844 11:31:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.844 11:31:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:58.844 11:31:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.844 11:31:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:58.844 11:31:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:58.844 11:31:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:58.845 [2024-07-25 11:31:57.680486] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
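The error above is exactly what skip_rpc_with_delay asserts: --wait-for-rpc holds app initialization until an RPC (framework_start_init) releases it, which is impossible when --no-rpc-server disables the RPC server, so spdk_tgt must refuse the combination and exit non-zero. A hedged sketch of that negative check, using the binary path seen throughout this run:

    SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    if $SPDK_BIN --no-rpc-server -m 0x1 --wait-for-rpc; then
      echo 'unexpected success: invalid flag combination was accepted' >&2
      exit 1
    fi
    # reaching this point means the target exited non-zero, as expected
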
00:06:58.845 [2024-07-25 11:31:57.680728] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:58.845 11:31:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:06:58.845 11:31:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:58.845 11:31:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:58.845 11:31:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:58.845 00:06:58.845 real 0m0.193s 00:06:58.845 user 0m0.112s 00:06:58.845 sys 0m0.079s 00:06:58.845 ************************************ 00:06:58.845 END TEST skip_rpc_with_delay 00:06:58.845 ************************************ 00:06:58.845 11:31:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:58.845 11:31:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:58.845 11:31:57 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:58.845 11:31:57 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:58.845 11:31:57 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:58.845 11:31:57 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:58.845 11:31:57 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:58.845 11:31:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.845 ************************************ 00:06:58.845 START TEST exit_on_failed_rpc_init 00:06:58.845 ************************************ 00:06:58.845 11:31:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:06:58.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.845 11:31:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=62830 00:06:58.845 11:31:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:58.845 11:31:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 62830 00:06:58.845 11:31:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 62830 ']' 00:06:58.845 11:31:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.845 11:31:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:58.845 11:31:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.845 11:31:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:58.845 11:31:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:59.123 [2024-07-25 11:31:57.939478] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:59.123 [2024-07-25 11:31:57.939970] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62830 ] 00:06:59.123 [2024-07-25 11:31:58.122965] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.380 [2024-07-25 11:31:58.401742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.314 11:31:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:00.314 11:31:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:07:00.314 11:31:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:00.314 11:31:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:00.314 11:31:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:07:00.314 11:31:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:00.314 11:31:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:00.314 11:31:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:00.314 11:31:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:00.314 11:31:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:00.314 11:31:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:00.314 11:31:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:00.314 11:31:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:00.314 11:31:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:07:00.314 11:31:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:00.314 [2024-07-25 11:31:59.353668] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:00.314 [2024-07-25 11:31:59.354497] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62854 ] 00:07:00.572 [2024-07-25 11:31:59.536082] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.830 [2024-07-25 11:31:59.816057] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.830 [2024-07-25 11:31:59.816199] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
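This "socket in use" error is the failure exit_on_failed_rpc_init provokes: both targets default to the same /var/tmp/spdk.sock, so the second one cannot bind its RPC listener and the app stops during init. A condensed sketch of the scenario (the crude sleep stands in for the harness's waitforlisten helper):

    SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    $SPDK_BIN -m 0x1 &                 # first target claims /var/tmp/spdk.sock
    FIRST=$!
    sleep 2                            # stand-in for waitforlisten
    $SPDK_BIN -m 0x2 \
      || echo 'second target failed RPC init, as the test expects'
    kill "$FIRST"
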
00:07:00.830 [2024-07-25 11:31:59.816235] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:07:00.830 [2024-07-25 11:31:59.816265] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:01.395 11:32:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:07:01.395 11:32:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:01.395 11:32:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:07:01.395 11:32:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:07:01.395 11:32:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:07:01.395 11:32:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:01.395 11:32:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:01.395 11:32:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 62830 00:07:01.395 11:32:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 62830 ']' 00:07:01.395 11:32:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 62830 00:07:01.395 11:32:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:07:01.395 11:32:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:01.395 11:32:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62830 00:07:01.395 killing process with pid 62830 00:07:01.395 11:32:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:01.395 11:32:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:01.395 11:32:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62830' 00:07:01.395 11:32:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 62830 00:07:01.395 11:32:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 62830 00:07:03.945 00:07:03.945 real 0m4.806s 00:07:03.945 user 0m5.469s 00:07:03.945 sys 0m0.736s 00:07:03.945 11:32:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:03.945 ************************************ 00:07:03.945 END TEST exit_on_failed_rpc_init 00:07:03.945 ************************************ 00:07:03.945 11:32:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:03.945 11:32:02 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:03.945 00:07:03.945 real 0m23.802s 00:07:03.945 user 0m23.041s 00:07:03.945 sys 0m2.471s 00:07:03.945 ************************************ 00:07:03.945 END TEST skip_rpc 00:07:03.945 ************************************ 00:07:03.945 11:32:02 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:03.945 11:32:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.945 11:32:02 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:03.945 11:32:02 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:03.945 11:32:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:03.945 11:32:02 -- common/autotest_common.sh@10 -- # set +x 00:07:03.945 
************************************ 00:07:03.945 START TEST rpc_client 00:07:03.945 ************************************ 00:07:03.945 11:32:02 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:03.945 * Looking for test storage... 00:07:03.945 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:07:03.945 11:32:02 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:07:03.945 OK 00:07:03.945 11:32:02 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:07:03.945 00:07:03.945 real 0m0.165s 00:07:03.945 user 0m0.074s 00:07:03.945 sys 0m0.095s 00:07:03.945 11:32:02 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:03.945 11:32:02 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:07:03.945 ************************************ 00:07:03.945 END TEST rpc_client 00:07:03.945 ************************************ 00:07:03.945 11:32:02 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:03.945 11:32:02 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:03.945 11:32:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:03.945 11:32:02 -- common/autotest_common.sh@10 -- # set +x 00:07:03.945 ************************************ 00:07:03.945 START TEST json_config 00:07:03.945 ************************************ 00:07:03.945 11:32:02 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:04.214 11:32:02 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:04.214 11:32:02 json_config -- nvmf/common.sh@7 -- # uname -s 00:07:04.214 11:32:02 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:04.214 11:32:02 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:04.214 11:32:02 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:04.214 11:32:02 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:04.214 11:32:02 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:04.214 11:32:02 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:04.214 11:32:03 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:04.214 11:32:03 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:04.214 11:32:03 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:04.214 11:32:03 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:04.214 11:32:03 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3a80fdc5-55c1-4700-bb2d-5636737b542b 00:07:04.214 11:32:03 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=3a80fdc5-55c1-4700-bb2d-5636737b542b 00:07:04.214 11:32:03 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:04.214 11:32:03 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:04.214 11:32:03 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:04.214 11:32:03 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:04.214 11:32:03 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:04.214 11:32:03 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:04.214 11:32:03 json_config -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:04.214 11:32:03 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:04.214 11:32:03 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.214 11:32:03 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.215 11:32:03 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.215 11:32:03 json_config -- paths/export.sh@5 -- # export PATH 00:07:04.215 11:32:03 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.215 11:32:03 json_config -- nvmf/common.sh@47 -- # : 0 00:07:04.215 11:32:03 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:04.215 11:32:03 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:04.215 11:32:03 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:04.215 11:32:03 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:04.215 11:32:03 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:04.215 11:32:03 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:04.215 11:32:03 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:04.215 11:32:03 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:04.215 11:32:03 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:07:04.215 11:32:03 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:07:04.215 11:32:03 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:07:04.215 11:32:03 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:07:04.215 11:32:03 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:07:04.215 11:32:03 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:07:04.215 
WARNING: No tests are enabled so not running JSON configuration tests 00:07:04.215 11:32:03 json_config -- json_config/json_config.sh@28 -- # exit 0 00:07:04.215 ************************************ 00:07:04.215 END TEST json_config 00:07:04.215 ************************************ 00:07:04.215 00:07:04.215 real 0m0.087s 00:07:04.215 user 0m0.034s 00:07:04.215 sys 0m0.051s 00:07:04.215 11:32:03 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:04.215 11:32:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:04.215 11:32:03 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:04.215 11:32:03 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:04.215 11:32:03 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:04.215 11:32:03 -- common/autotest_common.sh@10 -- # set +x 00:07:04.215 ************************************ 00:07:04.215 START TEST json_config_extra_key 00:07:04.215 ************************************ 00:07:04.215 11:32:03 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:04.215 11:32:03 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:04.215 11:32:03 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:07:04.215 11:32:03 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:04.215 11:32:03 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:04.215 11:32:03 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:04.215 11:32:03 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:04.215 11:32:03 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:04.215 11:32:03 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:04.215 11:32:03 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:04.215 11:32:03 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:04.215 11:32:03 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:04.215 11:32:03 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:04.215 11:32:03 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3a80fdc5-55c1-4700-bb2d-5636737b542b 00:07:04.215 11:32:03 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=3a80fdc5-55c1-4700-bb2d-5636737b542b 00:07:04.215 11:32:03 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:04.215 11:32:03 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:04.215 11:32:03 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:04.215 11:32:03 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:04.215 11:32:03 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:04.215 11:32:03 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:04.215 11:32:03 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:04.215 11:32:03 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:04.215 
11:32:03 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.215 11:32:03 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.215 11:32:03 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.215 11:32:03 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:07:04.215 11:32:03 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.215 11:32:03 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:07:04.215 11:32:03 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:04.215 11:32:03 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:04.215 11:32:03 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:04.215 11:32:03 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:04.215 11:32:03 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:04.215 11:32:03 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:04.215 11:32:03 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:04.215 11:32:03 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:04.215 11:32:03 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:07:04.215 11:32:03 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:07:04.215 11:32:03 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:07:04.215 INFO: launching applications... 00:07:04.215 Waiting for target to run... 
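(Annotation: the nvmf/common.sh trace above shows a common bash pattern, accumulating launcher arguments in an array so word boundaries survive, with optional groups appended only when a guard test passes. A minimal sketch of that pattern; the guard and flag names below are illustrative, not SPDK's actual variables:

    # accumulate launcher arguments in an array; arrays keep
    # word boundaries intact, unlike string concatenation
    NVMF_APP=(./build/bin/nvmf_tgt)
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)    # unconditional append, as traced
    if [ -n "$EXTRA_TRANSPORT_OPTS" ]; then        # hypothetical guard and options
        NVMF_APP+=($EXTRA_TRANSPORT_OPTS)          # intentionally unquoted: split into words
    fi
    "${NVMF_APP[@]}"                               # expand the array as the command line

This mirrors the `NVMF_APP+=(...)` and `'[' -n '' ']'` records in the trace, where empty guards skip the optional appends.)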
00:07:04.215 11:32:03 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:07:04.215 11:32:03 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:07:04.215 11:32:03 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:07:04.215 11:32:03 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:07:04.215 11:32:03 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:07:04.215 11:32:03 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:07:04.215 11:32:03 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:04.215 11:32:03 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:07:04.215 11:32:03 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:04.215 11:32:03 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:07:04.215 11:32:03 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:07:04.215 11:32:03 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:04.215 11:32:03 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:04.215 11:32:03 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:07:04.215 11:32:03 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:04.215 11:32:03 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:04.215 11:32:03 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=63040 00:07:04.215 11:32:03 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:04.215 11:32:03 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 63040 /var/tmp/spdk_tgt.sock 00:07:04.215 11:32:03 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 63040 ']' 00:07:04.215 11:32:03 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:04.215 11:32:03 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:04.215 11:32:03 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:04.215 11:32:03 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:04.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:04.216 11:32:03 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:04.216 11:32:03 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:04.475 [2024-07-25 11:32:03.306176] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
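(Annotation: the json_config_extra_key setup above keys all per-app state, pid, RPC socket, launch params, config path, by app name in bash associative arrays, launches spdk_tgt in the background, and then waits for its UNIX RPC socket to accept connections. A rough sketch of that lifecycle; the polling helper below is an assumption about what waitforlisten does, the real helper in autotest_common.sh differs in detail:

    declare -A app_pid app_socket                  # one slot per app name
    app_socket[target]=/var/tmp/spdk_tgt.sock

    start_app() {
        local app=$1
        ./build/bin/spdk_tgt -r "${app_socket[$app]}" &   # launch in background
        app_pid[$app]=$!
        # poll until the RPC socket answers; rpc_get_methods is a cheap probe
        for _ in $(seq 1 100); do
            ./scripts/rpc.py -s "${app_socket[$app]}" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1                                   # target never came up
    }

Keying by app name lets the same helpers manage multiple targets, e.g. a 'target' and an 'initiator', without duplicating logic.)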
00:07:04.475 [2024-07-25 11:32:03.307789] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63040 ] 00:07:05.040 [2024-07-25 11:32:03.810093] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.040 [2024-07-25 11:32:04.075992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.974 11:32:04 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:05.974 00:07:05.974 INFO: shutting down applications... 00:07:05.974 11:32:04 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:07:05.974 11:32:04 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:07:05.974 11:32:04 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:07:05.974 11:32:04 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:07:05.974 11:32:04 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:07:05.974 11:32:04 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:05.974 11:32:04 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 63040 ]] 00:07:05.975 11:32:04 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 63040 00:07:05.975 11:32:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:05.975 11:32:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:05.975 11:32:04 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 63040 00:07:05.975 11:32:04 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:06.232 11:32:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:06.232 11:32:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:06.232 11:32:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 63040 00:07:06.232 11:32:05 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:06.799 11:32:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:06.799 11:32:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:06.799 11:32:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 63040 00:07:06.799 11:32:05 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:07.367 11:32:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:07.367 11:32:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:07.367 11:32:06 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 63040 00:07:07.367 11:32:06 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:07.932 11:32:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:07.932 11:32:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:07.932 11:32:06 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 63040 00:07:07.932 11:32:06 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:08.498 11:32:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:08.498 11:32:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:08.498 11:32:07 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 63040 
00:07:08.498 11:32:07 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:08.755 11:32:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:08.755 11:32:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:08.755 11:32:07 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 63040 00:07:08.755 11:32:07 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:08.755 11:32:07 json_config_extra_key -- json_config/common.sh@43 -- # break 00:07:08.755 SPDK target shutdown done 00:07:08.755 Success 00:07:08.755 11:32:07 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:08.755 11:32:07 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:08.755 11:32:07 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:07:08.755 00:07:08.755 real 0m4.690s 00:07:08.755 user 0m4.085s 00:07:08.755 sys 0m0.678s 00:07:08.755 ************************************ 00:07:08.755 END TEST json_config_extra_key 00:07:08.755 ************************************ 00:07:08.755 11:32:07 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:08.755 11:32:07 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:09.013 11:32:07 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:09.013 11:32:07 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:09.013 11:32:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:09.013 11:32:07 -- common/autotest_common.sh@10 -- # set +x 00:07:09.013 ************************************ 00:07:09.013 START TEST alias_rpc 00:07:09.013 ************************************ 00:07:09.013 11:32:07 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:09.013 * Looking for test storage... 00:07:09.013 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:07:09.013 11:32:07 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:09.013 11:32:07 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:09.013 11:32:07 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=63143 00:07:09.013 11:32:07 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 63143 00:07:09.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.013 11:32:07 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 63143 ']' 00:07:09.013 11:32:07 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.013 11:32:07 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:09.013 11:32:07 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.013 11:32:07 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:09.013 11:32:07 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.013 [2024-07-25 11:32:08.042821] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
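(Annotation: the shutdown sequence traced above, one SIGINT followed by repeated `kill -0` liveness probes with 0.5 s sleeps for up to 30 iterations, is a standard graceful-stop loop. A condensed sketch; the force-kill fallback at the end is an assumption for completeness and is not shown in this trace, which breaks out as soon as the probe fails:

    shutdown_app() {
        local pid=$1
        kill -SIGINT "$pid"                 # ask the target to exit cleanly
        for (( i = 0; i < 30; i++ )); do
            # kill -0 sends no signal; it only tests whether the pid exists
            kill -0 "$pid" 2>/dev/null || { echo 'SPDK target shutdown done'; return 0; }
            sleep 0.5                       # still alive; wait and re-probe
        done
        kill -9 "$pid"                      # assumed fallback if SIGINT is ignored
    }

Probing with `kill -0` rather than `ps` avoids parsing process listings and works for any pid the shell may signal.)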
00:07:09.013 [2024-07-25 11:32:08.043051] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63143 ] 00:07:09.271 [2024-07-25 11:32:08.224407] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.529 [2024-07-25 11:32:08.523572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.463 11:32:09 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:10.463 11:32:09 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:10.463 11:32:09 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:07:10.721 11:32:09 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 63143 00:07:10.721 11:32:09 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 63143 ']' 00:07:10.721 11:32:09 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 63143 00:07:10.721 11:32:09 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:07:10.721 11:32:09 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:10.721 11:32:09 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63143 00:07:10.721 killing process with pid 63143 00:07:10.721 11:32:09 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:10.721 11:32:09 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:10.721 11:32:09 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63143' 00:07:10.721 11:32:09 alias_rpc -- common/autotest_common.sh@969 -- # kill 63143 00:07:10.721 11:32:09 alias_rpc -- common/autotest_common.sh@974 -- # wait 63143 00:07:13.322 ************************************ 00:07:13.323 END TEST alias_rpc 00:07:13.323 ************************************ 00:07:13.323 00:07:13.323 real 0m4.401s 00:07:13.323 user 0m4.507s 00:07:13.323 sys 0m0.662s 00:07:13.323 11:32:12 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:13.323 11:32:12 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.323 11:32:12 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:07:13.323 11:32:12 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:13.323 11:32:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:13.323 11:32:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:13.323 11:32:12 -- common/autotest_common.sh@10 -- # set +x 00:07:13.323 ************************************ 00:07:13.323 START TEST spdkcli_tcp 00:07:13.323 ************************************ 00:07:13.323 11:32:12 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:13.323 * Looking for test storage... 
00:07:13.323 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:07:13.323 11:32:12 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:07:13.323 11:32:12 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:07:13.323 11:32:12 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:07:13.323 11:32:12 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:13.323 11:32:12 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:13.323 11:32:12 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:13.323 11:32:12 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:13.323 11:32:12 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:13.323 11:32:12 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:13.323 11:32:12 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=63248 00:07:13.323 11:32:12 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 63248 00:07:13.323 11:32:12 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:13.323 11:32:12 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 63248 ']' 00:07:13.323 11:32:12 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.323 11:32:12 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:13.581 11:32:12 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.581 11:32:12 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:13.581 11:32:12 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:13.581 [2024-07-25 11:32:12.498391] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:07:13.581 [2024-07-25 11:32:12.498851] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63248 ] 00:07:13.840 [2024-07-25 11:32:12.675726] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:14.098 [2024-07-25 11:32:12.972435] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.098 [2024-07-25 11:32:12.972450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:15.033 11:32:13 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:15.033 11:32:13 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:07:15.033 11:32:13 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:07:15.033 11:32:13 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=63265 00:07:15.033 11:32:13 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:15.292 [ 00:07:15.292 "bdev_malloc_delete", 00:07:15.292 "bdev_malloc_create", 00:07:15.292 "bdev_null_resize", 00:07:15.292 "bdev_null_delete", 00:07:15.292 "bdev_null_create", 00:07:15.292 "bdev_nvme_cuse_unregister", 00:07:15.292 "bdev_nvme_cuse_register", 00:07:15.292 "bdev_opal_new_user", 00:07:15.292 "bdev_opal_set_lock_state", 00:07:15.292 "bdev_opal_delete", 00:07:15.292 "bdev_opal_get_info", 00:07:15.292 "bdev_opal_create", 00:07:15.292 "bdev_nvme_opal_revert", 00:07:15.292 "bdev_nvme_opal_init", 00:07:15.292 "bdev_nvme_send_cmd", 00:07:15.292 "bdev_nvme_get_path_iostat", 00:07:15.292 "bdev_nvme_get_mdns_discovery_info", 00:07:15.292 "bdev_nvme_stop_mdns_discovery", 00:07:15.292 "bdev_nvme_start_mdns_discovery", 00:07:15.292 "bdev_nvme_set_multipath_policy", 00:07:15.292 "bdev_nvme_set_preferred_path", 00:07:15.292 "bdev_nvme_get_io_paths", 00:07:15.292 "bdev_nvme_remove_error_injection", 00:07:15.292 "bdev_nvme_add_error_injection", 00:07:15.292 "bdev_nvme_get_discovery_info", 00:07:15.292 "bdev_nvme_stop_discovery", 00:07:15.292 "bdev_nvme_start_discovery", 00:07:15.292 "bdev_nvme_get_controller_health_info", 00:07:15.292 "bdev_nvme_disable_controller", 00:07:15.292 "bdev_nvme_enable_controller", 00:07:15.292 "bdev_nvme_reset_controller", 00:07:15.292 "bdev_nvme_get_transport_statistics", 00:07:15.292 "bdev_nvme_apply_firmware", 00:07:15.292 "bdev_nvme_detach_controller", 00:07:15.292 "bdev_nvme_get_controllers", 00:07:15.292 "bdev_nvme_attach_controller", 00:07:15.292 "bdev_nvme_set_hotplug", 00:07:15.292 "bdev_nvme_set_options", 00:07:15.292 "bdev_passthru_delete", 00:07:15.292 "bdev_passthru_create", 00:07:15.292 "bdev_lvol_set_parent_bdev", 00:07:15.292 "bdev_lvol_set_parent", 00:07:15.292 "bdev_lvol_check_shallow_copy", 00:07:15.292 "bdev_lvol_start_shallow_copy", 00:07:15.292 "bdev_lvol_grow_lvstore", 00:07:15.292 "bdev_lvol_get_lvols", 00:07:15.292 "bdev_lvol_get_lvstores", 00:07:15.292 "bdev_lvol_delete", 00:07:15.292 "bdev_lvol_set_read_only", 00:07:15.292 "bdev_lvol_resize", 00:07:15.292 "bdev_lvol_decouple_parent", 00:07:15.292 "bdev_lvol_inflate", 00:07:15.292 "bdev_lvol_rename", 00:07:15.292 "bdev_lvol_clone_bdev", 00:07:15.292 "bdev_lvol_clone", 00:07:15.292 "bdev_lvol_snapshot", 00:07:15.292 "bdev_lvol_create", 00:07:15.292 "bdev_lvol_delete_lvstore", 00:07:15.292 "bdev_lvol_rename_lvstore", 00:07:15.292 "bdev_lvol_create_lvstore", 
00:07:15.292 "bdev_raid_set_options", 00:07:15.292 "bdev_raid_remove_base_bdev", 00:07:15.292 "bdev_raid_add_base_bdev", 00:07:15.292 "bdev_raid_delete", 00:07:15.292 "bdev_raid_create", 00:07:15.292 "bdev_raid_get_bdevs", 00:07:15.292 "bdev_error_inject_error", 00:07:15.292 "bdev_error_delete", 00:07:15.292 "bdev_error_create", 00:07:15.292 "bdev_split_delete", 00:07:15.292 "bdev_split_create", 00:07:15.292 "bdev_delay_delete", 00:07:15.292 "bdev_delay_create", 00:07:15.292 "bdev_delay_update_latency", 00:07:15.292 "bdev_zone_block_delete", 00:07:15.292 "bdev_zone_block_create", 00:07:15.292 "blobfs_create", 00:07:15.292 "blobfs_detect", 00:07:15.292 "blobfs_set_cache_size", 00:07:15.292 "bdev_xnvme_delete", 00:07:15.292 "bdev_xnvme_create", 00:07:15.292 "bdev_aio_delete", 00:07:15.292 "bdev_aio_rescan", 00:07:15.292 "bdev_aio_create", 00:07:15.292 "bdev_ftl_set_property", 00:07:15.292 "bdev_ftl_get_properties", 00:07:15.292 "bdev_ftl_get_stats", 00:07:15.292 "bdev_ftl_unmap", 00:07:15.292 "bdev_ftl_unload", 00:07:15.292 "bdev_ftl_delete", 00:07:15.292 "bdev_ftl_load", 00:07:15.292 "bdev_ftl_create", 00:07:15.292 "bdev_virtio_attach_controller", 00:07:15.292 "bdev_virtio_scsi_get_devices", 00:07:15.292 "bdev_virtio_detach_controller", 00:07:15.292 "bdev_virtio_blk_set_hotplug", 00:07:15.292 "bdev_iscsi_delete", 00:07:15.292 "bdev_iscsi_create", 00:07:15.292 "bdev_iscsi_set_options", 00:07:15.292 "accel_error_inject_error", 00:07:15.292 "ioat_scan_accel_module", 00:07:15.292 "dsa_scan_accel_module", 00:07:15.292 "iaa_scan_accel_module", 00:07:15.292 "keyring_file_remove_key", 00:07:15.292 "keyring_file_add_key", 00:07:15.292 "keyring_linux_set_options", 00:07:15.292 "iscsi_get_histogram", 00:07:15.292 "iscsi_enable_histogram", 00:07:15.292 "iscsi_set_options", 00:07:15.292 "iscsi_get_auth_groups", 00:07:15.292 "iscsi_auth_group_remove_secret", 00:07:15.292 "iscsi_auth_group_add_secret", 00:07:15.292 "iscsi_delete_auth_group", 00:07:15.292 "iscsi_create_auth_group", 00:07:15.292 "iscsi_set_discovery_auth", 00:07:15.292 "iscsi_get_options", 00:07:15.292 "iscsi_target_node_request_logout", 00:07:15.292 "iscsi_target_node_set_redirect", 00:07:15.292 "iscsi_target_node_set_auth", 00:07:15.292 "iscsi_target_node_add_lun", 00:07:15.292 "iscsi_get_stats", 00:07:15.292 "iscsi_get_connections", 00:07:15.292 "iscsi_portal_group_set_auth", 00:07:15.292 "iscsi_start_portal_group", 00:07:15.292 "iscsi_delete_portal_group", 00:07:15.292 "iscsi_create_portal_group", 00:07:15.293 "iscsi_get_portal_groups", 00:07:15.293 "iscsi_delete_target_node", 00:07:15.293 "iscsi_target_node_remove_pg_ig_maps", 00:07:15.293 "iscsi_target_node_add_pg_ig_maps", 00:07:15.293 "iscsi_create_target_node", 00:07:15.293 "iscsi_get_target_nodes", 00:07:15.293 "iscsi_delete_initiator_group", 00:07:15.293 "iscsi_initiator_group_remove_initiators", 00:07:15.293 "iscsi_initiator_group_add_initiators", 00:07:15.293 "iscsi_create_initiator_group", 00:07:15.293 "iscsi_get_initiator_groups", 00:07:15.293 "nvmf_set_crdt", 00:07:15.293 "nvmf_set_config", 00:07:15.293 "nvmf_set_max_subsystems", 00:07:15.293 "nvmf_stop_mdns_prr", 00:07:15.293 "nvmf_publish_mdns_prr", 00:07:15.293 "nvmf_subsystem_get_listeners", 00:07:15.293 "nvmf_subsystem_get_qpairs", 00:07:15.293 "nvmf_subsystem_get_controllers", 00:07:15.293 "nvmf_get_stats", 00:07:15.293 "nvmf_get_transports", 00:07:15.293 "nvmf_create_transport", 00:07:15.293 "nvmf_get_targets", 00:07:15.293 "nvmf_delete_target", 00:07:15.293 "nvmf_create_target", 00:07:15.293 
"nvmf_subsystem_allow_any_host", 00:07:15.293 "nvmf_subsystem_remove_host", 00:07:15.293 "nvmf_subsystem_add_host", 00:07:15.293 "nvmf_ns_remove_host", 00:07:15.293 "nvmf_ns_add_host", 00:07:15.293 "nvmf_subsystem_remove_ns", 00:07:15.293 "nvmf_subsystem_add_ns", 00:07:15.293 "nvmf_subsystem_listener_set_ana_state", 00:07:15.293 "nvmf_discovery_get_referrals", 00:07:15.293 "nvmf_discovery_remove_referral", 00:07:15.293 "nvmf_discovery_add_referral", 00:07:15.293 "nvmf_subsystem_remove_listener", 00:07:15.293 "nvmf_subsystem_add_listener", 00:07:15.293 "nvmf_delete_subsystem", 00:07:15.293 "nvmf_create_subsystem", 00:07:15.293 "nvmf_get_subsystems", 00:07:15.293 "env_dpdk_get_mem_stats", 00:07:15.293 "nbd_get_disks", 00:07:15.293 "nbd_stop_disk", 00:07:15.293 "nbd_start_disk", 00:07:15.293 "ublk_recover_disk", 00:07:15.293 "ublk_get_disks", 00:07:15.293 "ublk_stop_disk", 00:07:15.293 "ublk_start_disk", 00:07:15.293 "ublk_destroy_target", 00:07:15.293 "ublk_create_target", 00:07:15.293 "virtio_blk_create_transport", 00:07:15.293 "virtio_blk_get_transports", 00:07:15.293 "vhost_controller_set_coalescing", 00:07:15.293 "vhost_get_controllers", 00:07:15.293 "vhost_delete_controller", 00:07:15.293 "vhost_create_blk_controller", 00:07:15.293 "vhost_scsi_controller_remove_target", 00:07:15.293 "vhost_scsi_controller_add_target", 00:07:15.293 "vhost_start_scsi_controller", 00:07:15.293 "vhost_create_scsi_controller", 00:07:15.293 "thread_set_cpumask", 00:07:15.293 "framework_get_governor", 00:07:15.293 "framework_get_scheduler", 00:07:15.293 "framework_set_scheduler", 00:07:15.293 "framework_get_reactors", 00:07:15.293 "thread_get_io_channels", 00:07:15.293 "thread_get_pollers", 00:07:15.293 "thread_get_stats", 00:07:15.293 "framework_monitor_context_switch", 00:07:15.293 "spdk_kill_instance", 00:07:15.293 "log_enable_timestamps", 00:07:15.293 "log_get_flags", 00:07:15.293 "log_clear_flag", 00:07:15.293 "log_set_flag", 00:07:15.293 "log_get_level", 00:07:15.293 "log_set_level", 00:07:15.293 "log_get_print_level", 00:07:15.293 "log_set_print_level", 00:07:15.293 "framework_enable_cpumask_locks", 00:07:15.293 "framework_disable_cpumask_locks", 00:07:15.293 "framework_wait_init", 00:07:15.293 "framework_start_init", 00:07:15.293 "scsi_get_devices", 00:07:15.293 "bdev_get_histogram", 00:07:15.293 "bdev_enable_histogram", 00:07:15.293 "bdev_set_qos_limit", 00:07:15.293 "bdev_set_qd_sampling_period", 00:07:15.293 "bdev_get_bdevs", 00:07:15.293 "bdev_reset_iostat", 00:07:15.293 "bdev_get_iostat", 00:07:15.293 "bdev_examine", 00:07:15.293 "bdev_wait_for_examine", 00:07:15.293 "bdev_set_options", 00:07:15.293 "notify_get_notifications", 00:07:15.293 "notify_get_types", 00:07:15.293 "accel_get_stats", 00:07:15.293 "accel_set_options", 00:07:15.293 "accel_set_driver", 00:07:15.293 "accel_crypto_key_destroy", 00:07:15.293 "accel_crypto_keys_get", 00:07:15.293 "accel_crypto_key_create", 00:07:15.293 "accel_assign_opc", 00:07:15.293 "accel_get_module_info", 00:07:15.293 "accel_get_opc_assignments", 00:07:15.293 "vmd_rescan", 00:07:15.293 "vmd_remove_device", 00:07:15.293 "vmd_enable", 00:07:15.293 "sock_get_default_impl", 00:07:15.293 "sock_set_default_impl", 00:07:15.293 "sock_impl_set_options", 00:07:15.293 "sock_impl_get_options", 00:07:15.293 "iobuf_get_stats", 00:07:15.293 "iobuf_set_options", 00:07:15.293 "framework_get_pci_devices", 00:07:15.293 "framework_get_config", 00:07:15.293 "framework_get_subsystems", 00:07:15.293 "trace_get_info", 00:07:15.293 "trace_get_tpoint_group_mask", 00:07:15.293 
"trace_disable_tpoint_group", 00:07:15.293 "trace_enable_tpoint_group", 00:07:15.293 "trace_clear_tpoint_mask", 00:07:15.293 "trace_set_tpoint_mask", 00:07:15.293 "keyring_get_keys", 00:07:15.293 "spdk_get_version", 00:07:15.293 "rpc_get_methods" 00:07:15.293 ] 00:07:15.293 11:32:14 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:15.293 11:32:14 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:15.293 11:32:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:15.293 11:32:14 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:15.293 11:32:14 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 63248 00:07:15.293 11:32:14 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 63248 ']' 00:07:15.293 11:32:14 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 63248 00:07:15.293 11:32:14 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:07:15.293 11:32:14 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:15.293 11:32:14 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63248 00:07:15.293 killing process with pid 63248 00:07:15.293 11:32:14 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:15.293 11:32:14 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:15.293 11:32:14 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63248' 00:07:15.293 11:32:14 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 63248 00:07:15.293 11:32:14 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 63248 00:07:17.822 ************************************ 00:07:17.822 END TEST spdkcli_tcp 00:07:17.822 ************************************ 00:07:17.822 00:07:17.822 real 0m4.279s 00:07:17.822 user 0m7.509s 00:07:17.822 sys 0m0.690s 00:07:17.822 11:32:16 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:17.822 11:32:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:17.822 11:32:16 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:17.822 11:32:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:17.822 11:32:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:17.822 11:32:16 -- common/autotest_common.sh@10 -- # set +x 00:07:17.822 ************************************ 00:07:17.822 START TEST dpdk_mem_utility 00:07:17.822 ************************************ 00:07:17.822 11:32:16 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:17.822 * Looking for test storage... 00:07:17.822 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:07:17.822 11:32:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:17.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:17.822 11:32:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=63362 00:07:17.822 11:32:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:17.822 11:32:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 63362 00:07:17.822 11:32:16 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 63362 ']' 00:07:17.822 11:32:16 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.822 11:32:16 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:17.823 11:32:16 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.823 11:32:16 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:17.823 11:32:16 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:17.823 [2024-07-25 11:32:16.826674] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:17.823 [2024-07-25 11:32:16.827330] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63362 ] 00:07:18.080 [2024-07-25 11:32:17.011100] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.338 [2024-07-25 11:32:17.312880] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.346 11:32:18 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:19.346 11:32:18 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:07:19.346 11:32:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:19.346 11:32:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:19.346 11:32:18 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.346 11:32:18 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:19.346 { 00:07:19.346 "filename": "/tmp/spdk_mem_dump.txt" 00:07:19.346 } 00:07:19.346 11:32:18 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.346 11:32:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:19.346 DPDK memory size 820.000000 MiB in 1 heap(s) 00:07:19.346 1 heaps totaling size 820.000000 MiB 00:07:19.346 size: 820.000000 MiB heap id: 0 00:07:19.346 end heaps---------- 00:07:19.346 8 mempools totaling size 598.116089 MiB 00:07:19.346 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:19.346 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:19.346 size: 84.521057 MiB name: bdev_io_63362 00:07:19.346 size: 51.011292 MiB name: evtpool_63362 00:07:19.346 size: 50.003479 MiB name: msgpool_63362 00:07:19.346 size: 21.763794 MiB name: PDU_Pool 00:07:19.346 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:19.346 size: 0.026123 MiB name: Session_Pool 00:07:19.346 end mempools------- 00:07:19.346 6 memzones totaling size 4.142822 MiB 00:07:19.346 size: 1.000366 MiB name: RG_ring_0_63362 00:07:19.346 size: 1.000366 MiB name: RG_ring_1_63362 00:07:19.346 size: 1.000366 MiB name: RG_ring_4_63362 00:07:19.346 size: 1.000366 MiB name: RG_ring_5_63362 
00:07:19.346 size: 0.125366 MiB name: RG_ring_2_63362 00:07:19.346 size: 0.015991 MiB name: RG_ring_3_63362 00:07:19.346 end memzones------- 00:07:19.346 11:32:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:07:19.346 heap id: 0 total size: 820.000000 MiB number of busy elements: 298 number of free elements: 18 00:07:19.346 list of free elements. size: 18.452026 MiB 00:07:19.346 element at address: 0x200000400000 with size: 1.999451 MiB 00:07:19.346 element at address: 0x200000800000 with size: 1.996887 MiB 00:07:19.346 element at address: 0x200007000000 with size: 1.995972 MiB 00:07:19.346 element at address: 0x20000b200000 with size: 1.995972 MiB 00:07:19.346 element at address: 0x200019100040 with size: 0.999939 MiB 00:07:19.346 element at address: 0x200019500040 with size: 0.999939 MiB 00:07:19.346 element at address: 0x200019600000 with size: 0.999084 MiB 00:07:19.346 element at address: 0x200003e00000 with size: 0.996094 MiB 00:07:19.346 element at address: 0x200032200000 with size: 0.994324 MiB 00:07:19.346 element at address: 0x200018e00000 with size: 0.959656 MiB 00:07:19.346 element at address: 0x200019900040 with size: 0.936401 MiB 00:07:19.346 element at address: 0x200000200000 with size: 0.829956 MiB 00:07:19.346 element at address: 0x20001b000000 with size: 0.564636 MiB 00:07:19.346 element at address: 0x200019200000 with size: 0.487976 MiB 00:07:19.346 element at address: 0x200019a00000 with size: 0.485413 MiB 00:07:19.346 element at address: 0x200013800000 with size: 0.467896 MiB 00:07:19.346 element at address: 0x200028400000 with size: 0.390442 MiB 00:07:19.346 element at address: 0x200003a00000 with size: 0.351990 MiB 00:07:19.346 list of standard malloc elements. 
size: 199.283569 MiB 00:07:19.346 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:07:19.346 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:07:19.346 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:07:19.346 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:07:19.346 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:07:19.346 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:07:19.346 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:07:19.346 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:07:19.346 element at address: 0x20000b1ff040 with size: 0.000427 MiB 00:07:19.346 element at address: 0x2000199efdc0 with size: 0.000366 MiB 00:07:19.346 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:07:19.346 element at address: 0x2000002d4780 with size: 0.000244 MiB 00:07:19.346 element at address: 0x2000002d4880 with size: 0.000244 MiB 00:07:19.346 element at address: 0x2000002d4980 with size: 0.000244 MiB 00:07:19.346 element at address: 0x2000002d4a80 with size: 0.000244 MiB 00:07:19.346 element at address: 0x2000002d4b80 with size: 0.000244 MiB 00:07:19.346 element at address: 0x2000002d4c80 with size: 0.000244 MiB 00:07:19.346 element at address: 0x2000002d4d80 with size: 0.000244 MiB 00:07:19.346 element at address: 0x2000002d4e80 with size: 0.000244 MiB 00:07:19.346 element at address: 0x2000002d4f80 with size: 0.000244 MiB 00:07:19.346 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:07:19.346 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:07:19.346 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:07:19.346 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:07:19.346 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:07:19.346 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:07:19.346 element at address: 0x2000002d5680 with size: 0.000244 MiB 00:07:19.346 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:07:19.346 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:07:19.346 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:07:19.346 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:07:19.346 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:07:19.346 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:07:19.346 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:07:19.346 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:07:19.346 element at address: 0x2000002d6100 with size: 0.000244 MiB 00:07:19.346 element at address: 0x2000002d6200 with size: 0.000244 MiB 00:07:19.346 element at address: 0x2000002d6300 with size: 0.000244 MiB 00:07:19.346 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:07:19.346 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:07:19.346 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:07:19.346 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:07:19.346 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:07:19.346 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:07:19.346 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:07:19.346 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:07:19.346 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:07:19.346 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:07:19.346 element at address: 0x2000002d6e00 with size: 0.000244 MiB 
00:07:19.346 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:07:19.346 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:07:19.346 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:07:19.346 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:07:19.346 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:07:19.346 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:07:19.346 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:07:19.346 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:07:19.346 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:07:19.346 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:07:19.346 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:07:19.346 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:07:19.346 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:07:19.346 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:07:19.346 element at address: 0x200003a5a1c0 with size: 0.000244 MiB 00:07:19.346 element at address: 0x200003a5a2c0 with size: 0.000244 MiB 00:07:19.346 element at address: 0x200003a5a3c0 with size: 0.000244 MiB 00:07:19.346 element at address: 0x200003a5a4c0 with size: 0.000244 MiB 00:07:19.346 element at address: 0x200003a5a5c0 with size: 0.000244 MiB 00:07:19.346 element at address: 0x200003a5a6c0 with size: 0.000244 MiB 00:07:19.346 element at address: 0x200003a5a7c0 with size: 0.000244 MiB 00:07:19.346 element at address: 0x200003a5a8c0 with size: 0.000244 MiB 00:07:19.346 element at address: 0x200003a5a9c0 with size: 0.000244 MiB 00:07:19.346 element at address: 0x200003a5aac0 with size: 0.000244 MiB 00:07:19.346 element at address: 0x200003a5abc0 with size: 0.000244 MiB 00:07:19.346 element at address: 0x200003a5acc0 with size: 0.000244 MiB 00:07:19.346 element at address: 0x200003a5adc0 with size: 0.000244 MiB 00:07:19.347 element at address: 0x200003a5aec0 with size: 0.000244 MiB 00:07:19.347 element at address: 0x200003a5afc0 with size: 0.000244 MiB 00:07:19.347 element at address: 0x200003a5b0c0 with size: 0.000244 MiB 00:07:19.347 element at address: 0x200003a5b1c0 with size: 0.000244 MiB 00:07:19.347 element at address: 0x200003aff980 with size: 0.000244 MiB 00:07:19.347 element at address: 0x200003affa80 with size: 0.000244 MiB 00:07:19.347 element at address: 0x200003eff000 with size: 0.000244 MiB 00:07:19.347 element at address: 0x20000b1ff200 with size: 0.000244 MiB 00:07:19.347 element at address: 0x20000b1ff300 with size: 0.000244 MiB 00:07:19.347 element at address: 0x20000b1ff400 with size: 0.000244 MiB 00:07:19.347 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:07:19.347 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:07:19.347 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:07:19.347 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:07:19.347 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:07:19.347 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:07:19.347 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:07:19.347 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:07:19.347 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:07:19.347 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:07:19.347 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:07:19.347 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:07:19.347 element at 
address: 0x2000137ff280 with size: 0.000244 MiB 00:07:19.347 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:07:19.347 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:07:19.347 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:07:19.347 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:07:19.347 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:07:19.347 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:07:19.347 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:07:19.347 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:07:19.347 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:07:19.347 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:07:19.347 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:07:19.347 element at address: 0x200013877c80 with size: 0.000244 MiB 00:07:19.347 element at address: 0x200013877d80 with size: 0.000244 MiB 00:07:19.347 element at address: 0x200013877e80 with size: 0.000244 MiB 00:07:19.347 element at address: 0x200013877f80 with size: 0.000244 MiB 00:07:19.347 element at address: 0x200013878080 with size: 0.000244 MiB 00:07:19.347 element at address: 0x200013878180 with size: 0.000244 MiB 00:07:19.347 element at address: 0x200013878280 with size: 0.000244 MiB 00:07:19.347 element at address: 0x200013878380 with size: 0.000244 MiB 00:07:19.347 element at address: 0x200013878480 with size: 0.000244 MiB 00:07:19.347 element at address: 0x200013878580 with size: 0.000244 MiB 00:07:19.347 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:07:19.347 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:07:19.347 element at address: 0x20001927cec0 with size: 0.000244 MiB 00:07:19.347 element at address: 0x20001927cfc0 with size: 0.000244 MiB 00:07:19.347 element at address: 0x20001927d0c0 with size: 0.000244 MiB 00:07:19.347 element at address: 0x20001927d1c0 with size: 0.000244 MiB 00:07:19.347 element at address: 0x20001927d2c0 with size: 0.000244 MiB 00:07:19.347 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:07:19.347 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:07:19.347 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:07:19.347 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:07:19.347 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:07:19.347 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:07:19.347 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:07:19.347 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:07:19.347 element at address: 0x2000196ffc40 with size: 0.000244 MiB 00:07:19.347 element at address: 0x2000199efbc0 with size: 0.000244 MiB 00:07:19.347 element at address: 0x2000199efcc0 with size: 0.000244 MiB 00:07:19.347 element at address: 0x200019abc680 with size: 0.000244 MiB 00:07:19.347 element at address: 0x20001b0908c0 with size: 0.000244 MiB 00:07:19.347 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:07:19.347 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:07:19.347 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:07:19.347 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:07:19.347 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:07:19.347 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:07:19.347 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:07:19.347 element at address: 0x20001b0910c0 
with size: 0.000244 MiB
00:07:19.347 element at address: 0x20001b0911c0 with size: 0.000244 MiB
[... remaining 0.000244 MiB elements at 0x20001b0912c0 through 0x20001b0953c0 elided ...]
00:07:19.347 element at address: 0x200028463f40 with size: 0.000244 MiB
00:07:19.347 element at address: 0x200028464040 with size: 0.000244 MiB
[... remaining 0.000244 MiB elements at 0x20002846ad00 through 0x20002846fe80 elided ...]
00:07:19.348 list of memzone associated elements. size: 602.264404 MiB
00:07:19.348 element at address: 0x20001b0954c0 with size: 211.416809 MiB
00:07:19.348 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:07:19.348 element at address: 0x20002846ff80 with size: 157.562622 MiB
00:07:19.348 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:07:19.348 element at address: 0x2000139fab40 with size: 84.020691 MiB
00:07:19.348 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_63362_0
00:07:19.348 element at address: 0x2000009ff340 with size: 48.003113 MiB
00:07:19.348 associated memzone info: size: 48.002930 MiB name: MP_evtpool_63362_0
00:07:19.348 element at address: 0x200003fff340 with size: 48.003113 MiB
00:07:19.348 associated memzone info: size: 48.002930 MiB name: MP_msgpool_63362_0
00:07:19.348 element at address: 0x200019bbe900 with size: 20.255615 MiB
00:07:19.348 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:07:19.348 element at address: 0x2000323feb00 with size: 18.005127 MiB
00:07:19.348 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:07:19.348 element at address: 0x2000005ffdc0 with size: 2.000549 MiB
00:07:19.348 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_63362
00:07:19.348 element at address: 0x200003bffdc0 with size: 2.000549 MiB
00:07:19.348 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_63362
00:07:19.348 element at address: 0x2000002d7c00 with size: 1.008179 MiB
00:07:19.348 associated memzone info: size: 1.007996 MiB name: MP_evtpool_63362
00:07:19.348 element at address: 0x2000192fde00 with size: 1.008179 MiB
00:07:19.348 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:07:19.348 element at address: 0x200019abc780 with size: 1.008179 MiB
00:07:19.348 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:07:19.348 element at address: 0x200018efde00 with size: 1.008179 MiB
00:07:19.348 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:07:19.348 element at address: 0x2000138f89c0 with size: 1.008179 MiB
00:07:19.348 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:07:19.348 element at address: 0x200003eff100 with size: 1.000549 MiB
00:07:19.348 associated memzone info: size: 1.000366 MiB name: RG_ring_0_63362
00:07:19.348 element at address: 0x200003affb80 with size: 1.000549 MiB
00:07:19.348 associated memzone info: size: 1.000366 MiB name: RG_ring_1_63362
00:07:19.348 element at address: 0x2000196ffd40 with size: 1.000549 MiB
00:07:19.348 associated memzone info: size: 1.000366 MiB name: RG_ring_4_63362
00:07:19.348 element at address: 0x2000322fe8c0 with size: 1.000549 MiB
00:07:19.348 associated memzone info: size: 1.000366 MiB name: RG_ring_5_63362
00:07:19.348 element at address: 0x200003a5b2c0 with size: 0.500549 MiB
00:07:19.348 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_63362
00:07:19.348 element at address: 0x20001927dac0 with size: 0.500549 MiB
00:07:19.348 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:07:19.348 element at address: 0x200013878680 with size: 0.500549 MiB
00:07:19.348 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:07:19.348 element at address: 0x200019a7c440 with size:
0.250549 MiB 00:07:19.348 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:07:19.348 element at address: 0x200003adf740 with size: 0.125549 MiB 00:07:19.348 associated memzone info: size: 0.125366 MiB name: RG_ring_2_63362 00:07:19.348 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:07:19.348 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:19.348 element at address: 0x200028464140 with size: 0.023804 MiB 00:07:19.348 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:19.348 element at address: 0x200003adb500 with size: 0.016174 MiB 00:07:19.348 associated memzone info: size: 0.015991 MiB name: RG_ring_3_63362 00:07:19.348 element at address: 0x20002846a2c0 with size: 0.002502 MiB 00:07:19.348 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:19.348 element at address: 0x2000002d5f80 with size: 0.000366 MiB 00:07:19.348 associated memzone info: size: 0.000183 MiB name: MP_msgpool_63362 00:07:19.348 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:07:19.348 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_63362 00:07:19.348 element at address: 0x20002846ae00 with size: 0.000366 MiB 00:07:19.348 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:19.348 11:32:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:19.348 11:32:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 63362 00:07:19.348 11:32:18 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 63362 ']' 00:07:19.348 11:32:18 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 63362 00:07:19.348 11:32:18 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:07:19.348 11:32:18 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:19.348 11:32:18 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63362 00:07:19.348 killing process with pid 63362 00:07:19.348 11:32:18 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:19.349 11:32:18 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:19.349 11:32:18 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63362' 00:07:19.349 11:32:18 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 63362 00:07:19.349 11:32:18 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 63362 00:07:21.876 ************************************ 00:07:21.876 END TEST dpdk_mem_utility 00:07:21.876 ************************************ 00:07:21.876 00:07:21.876 real 0m4.205s 00:07:21.876 user 0m4.150s 00:07:21.876 sys 0m0.655s 00:07:21.876 11:32:20 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:21.876 11:32:20 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:21.876 11:32:20 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:21.876 11:32:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:21.876 11:32:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:21.876 11:32:20 -- common/autotest_common.sh@10 -- # set +x 00:07:21.876 ************************************ 00:07:21.876 START TEST event 00:07:21.876 ************************************ 00:07:21.876 11:32:20 event -- common/autotest_common.sh@1125 -- # 
/home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:22.134 * Looking for test storage... 00:07:22.134 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:22.134 11:32:20 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:22.134 11:32:20 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:22.134 11:32:20 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:22.134 11:32:20 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:07:22.134 11:32:20 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:22.134 11:32:20 event -- common/autotest_common.sh@10 -- # set +x 00:07:22.134 ************************************ 00:07:22.134 START TEST event_perf 00:07:22.134 ************************************ 00:07:22.134 11:32:20 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:22.134 Running I/O for 1 seconds...[2024-07-25 11:32:21.007806] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:22.134 [2024-07-25 11:32:21.008215] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63464 ] 00:07:22.396 [2024-07-25 11:32:21.188337] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:22.396 [2024-07-25 11:32:21.437509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.396 [2024-07-25 11:32:21.437652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:22.396 [2024-07-25 11:32:21.437735] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.396 [2024-07-25 11:32:21.437754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:24.305 Running I/O for 1 seconds... 00:07:24.305 lcore 0: 197462 00:07:24.305 lcore 1: 197462 00:07:24.305 lcore 2: 197458 00:07:24.305 lcore 3: 197461 00:07:24.305 done. 00:07:24.305 00:07:24.305 real 0m1.903s 00:07:24.305 user 0m4.633s 00:07:24.305 sys 0m0.144s 00:07:24.305 11:32:22 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:24.305 11:32:22 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:24.305 ************************************ 00:07:24.305 END TEST event_perf 00:07:24.305 ************************************ 00:07:24.305 11:32:22 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:24.305 11:32:22 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:24.305 11:32:22 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:24.305 11:32:22 event -- common/autotest_common.sh@10 -- # set +x 00:07:24.305 ************************************ 00:07:24.305 START TEST event_reactor 00:07:24.305 ************************************ 00:07:24.305 11:32:22 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:24.305 [2024-07-25 11:32:22.968255] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
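The event_perf figures above (one event counter per lcore over the 1-second run) come straight from the test binaries under test/event. A minimal sketch of re-running the same three micro-benchmarks by hand, using only the binaries and flags that appear in this log (paths assume the /home/vagrant/spdk_repo checkout used by this job):

# Sketch: re-run the event framework micro-benchmarks as this job does.
# -m is the reactor core mask (event_perf only here), -t the run time in seconds.
SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/test/event/event_perf/event_perf" -m 0xF -t 1   # prints per-lcore event counts
"$SPDK/test/event/reactor/reactor" -t 1                # prints the oneshot/tick trace
"$SPDK/test/event/reactor_perf/reactor_perf" -t 1      # prints events per second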
00:07:24.305 [2024-07-25 11:32:22.968467] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63510 ] 00:07:24.305 [2024-07-25 11:32:23.141173] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.562 [2024-07-25 11:32:23.378553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.954 test_start 00:07:25.954 oneshot 00:07:25.954 tick 100 00:07:25.954 tick 100 00:07:25.954 tick 250 00:07:25.954 tick 100 00:07:25.954 tick 100 00:07:25.954 tick 100 00:07:25.954 tick 250 00:07:25.954 tick 500 00:07:25.954 tick 100 00:07:25.954 tick 100 00:07:25.954 tick 250 00:07:25.954 tick 100 00:07:25.954 tick 100 00:07:25.954 test_end 00:07:25.954 00:07:25.954 real 0m1.870s 00:07:25.954 user 0m1.640s 00:07:25.954 sys 0m0.118s 00:07:25.954 11:32:24 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:25.954 11:32:24 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:25.954 ************************************ 00:07:25.954 END TEST event_reactor 00:07:25.954 ************************************ 00:07:25.954 11:32:24 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:25.954 11:32:24 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:25.954 11:32:24 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:25.954 11:32:24 event -- common/autotest_common.sh@10 -- # set +x 00:07:25.954 ************************************ 00:07:25.954 START TEST event_reactor_perf 00:07:25.954 ************************************ 00:07:25.954 11:32:24 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:25.954 [2024-07-25 11:32:24.893287] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:07:25.954 [2024-07-25 11:32:24.893465] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63552 ] 00:07:26.211 [2024-07-25 11:32:25.059864] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.467 [2024-07-25 11:32:25.303478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.835 test_start 00:07:27.835 test_end 00:07:27.835 Performance: 274597 events per second 00:07:27.835 ************************************ 00:07:27.835 END TEST event_reactor_perf 00:07:27.835 ************************************ 00:07:27.835 00:07:27.835 real 0m1.877s 00:07:27.835 user 0m1.647s 00:07:27.835 sys 0m0.119s 00:07:27.835 11:32:26 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:27.835 11:32:26 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:27.835 11:32:26 event -- event/event.sh@49 -- # uname -s 00:07:27.835 11:32:26 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:27.835 11:32:26 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:27.835 11:32:26 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:27.835 11:32:26 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:27.835 11:32:26 event -- common/autotest_common.sh@10 -- # set +x 00:07:27.835 ************************************ 00:07:27.835 START TEST event_scheduler 00:07:27.835 ************************************ 00:07:27.835 11:32:26 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:27.835 * Looking for test storage... 00:07:27.835 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:07:27.835 11:32:26 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:27.835 11:32:26 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=63620 00:07:27.835 11:32:26 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:27.835 11:32:26 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:27.835 11:32:26 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 63620 00:07:27.835 11:32:26 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 63620 ']' 00:07:27.835 11:32:26 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.835 11:32:26 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:27.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.835 11:32:26 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.835 11:32:26 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:27.835 11:32:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:28.092 [2024-07-25 11:32:26.983016] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:07:28.092 [2024-07-25 11:32:26.983236] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63620 ] 00:07:28.349 [2024-07-25 11:32:27.170626] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:28.606 [2024-07-25 11:32:27.460372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.606 [2024-07-25 11:32:27.460548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:28.606 [2024-07-25 11:32:27.460679] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:28.606 [2024-07-25 11:32:27.460979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:29.175 11:32:27 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:29.175 11:32:27 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:07:29.175 11:32:27 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:29.175 11:32:27 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.175 11:32:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:29.175 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:29.175 POWER: Cannot set governor of lcore 0 to userspace 00:07:29.175 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:29.175 POWER: Cannot set governor of lcore 0 to performance 00:07:29.175 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:29.175 POWER: Cannot set governor of lcore 0 to userspace 00:07:29.175 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:29.175 POWER: Cannot set governor of lcore 0 to userspace 00:07:29.175 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:07:29.175 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:07:29.175 POWER: Unable to set Power Management Environment for lcore 0 00:07:29.175 [2024-07-25 11:32:27.955037] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:07:29.175 [2024-07-25 11:32:27.955062] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:07:29.175 [2024-07-25 11:32:27.955082] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:07:29.176 [2024-07-25 11:32:27.955119] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:29.176 [2024-07-25 11:32:27.955137] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:29.176 [2024-07-25 11:32:27.955151] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:29.176 11:32:27 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.176 11:32:27 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:29.176 11:32:27 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.176 11:32:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:29.468 [2024-07-25 11:32:28.291579] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
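The bring-up traced above is the standard --wait-for-rpc pattern: the scheduler app starts with subsystem init deferred, framework_set_scheduler selects the dynamic scheduler (the POWER/governor errors merely show the DPDK governor being unavailable in this VM, after which the dynamic scheduler continues without it), and framework_start_init then completes initialization. A minimal sketch of the same RPC sequence, assuming an SPDK app launched with --wait-for-rpc on the default socket:

# Sketch: select the dynamic scheduler while init is still deferred, as the test does.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$RPC" framework_set_scheduler dynamic   # issued before framework init, per the trace above
"$RPC" framework_start_init              # finish the deferred initialization
"$RPC" framework_get_scheduler           # verify: should report the dynamic scheduler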
00:07:29.468 11:32:28 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.468 11:32:28 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:29.468 11:32:28 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:29.468 11:32:28 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:29.468 11:32:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:29.468 ************************************ 00:07:29.468 START TEST scheduler_create_thread 00:07:29.468 ************************************ 00:07:29.468 11:32:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:07:29.468 11:32:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:29.468 11:32:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.468 11:32:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:29.468 2 00:07:29.468 11:32:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.468 11:32:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:29.468 11:32:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.468 11:32:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:29.468 3 00:07:29.468 11:32:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.468 11:32:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:29.468 11:32:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.468 11:32:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:29.468 4 00:07:29.468 11:32:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.468 11:32:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:29.468 11:32:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.468 11:32:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:29.468 5 00:07:29.468 11:32:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.468 11:32:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:29.468 11:32:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.468 11:32:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:29.468 6 00:07:29.468 11:32:28 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.468 11:32:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:29.468 11:32:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.468 11:32:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:29.468 7 00:07:29.468 11:32:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.468 11:32:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:29.468 11:32:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.468 11:32:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:29.468 8 00:07:29.468 11:32:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.468 11:32:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:29.468 11:32:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.468 11:32:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:29.468 9 00:07:29.468 11:32:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.468 11:32:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:29.468 11:32:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.468 11:32:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:29.468 10 00:07:29.468 11:32:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.468 11:32:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:29.468 11:32:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.468 11:32:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:29.468 11:32:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.468 11:32:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:29.468 11:32:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:29.468 11:32:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.468 11:32:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:29.469 11:32:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.469 11:32:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:29.469 11:32:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.469 11:32:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:30.402 11:32:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.402 11:32:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:30.402 11:32:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:30.402 11:32:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.402 11:32:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:31.775 ************************************ 00:07:31.775 END TEST scheduler_create_thread 00:07:31.775 ************************************ 00:07:31.775 11:32:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.775 00:07:31.775 real 0m2.140s 00:07:31.775 user 0m0.018s 00:07:31.775 sys 0m0.007s 00:07:31.775 11:32:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:31.775 11:32:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:31.775 11:32:30 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:31.776 11:32:30 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 63620 00:07:31.776 11:32:30 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 63620 ']' 00:07:31.776 11:32:30 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 63620 00:07:31.776 11:32:30 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:07:31.776 11:32:30 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:31.776 11:32:30 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63620 00:07:31.776 killing process with pid 63620 00:07:31.776 11:32:30 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:31.776 11:32:30 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:31.776 11:32:30 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63620' 00:07:31.776 11:32:30 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 63620 00:07:31.776 11:32:30 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 63620 00:07:32.034 [2024-07-25 11:32:30.925967] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
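killprocess, traced here and after the dpdk_mem_utility test above, boils down to a guarded kill-and-wait: probe the pid with kill -0, look up its command name with ps, send the signal, then reap the process. A simplified stand-in (not the real autotest_common.sh helper):

# Sketch: guarded process teardown in the spirit of killprocess <pid>.
killprocess_sketch() {
  local pid=$1
  kill -0 "$pid" || return 0                 # already gone, nothing to do
  local name
  name=$(ps --no-headers -o comm= "$pid")    # e.g. reactor_0 or reactor_2 above
  echo "killing process with pid $pid ($name)"
  kill "$pid"                                # SIGTERM by default
  wait "$pid" 2>/dev/null || true            # reap it if it is a shell child
}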
00:07:33.406 00:07:33.406 real 0m5.419s 00:07:33.406 user 0m8.752s 00:07:33.406 sys 0m0.504s 00:07:33.406 ************************************ 00:07:33.406 END TEST event_scheduler 00:07:33.406 ************************************ 00:07:33.406 11:32:32 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:33.406 11:32:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:33.406 11:32:32 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:33.406 11:32:32 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:33.406 11:32:32 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:33.406 11:32:32 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:33.406 11:32:32 event -- common/autotest_common.sh@10 -- # set +x 00:07:33.406 ************************************ 00:07:33.406 START TEST app_repeat 00:07:33.406 ************************************ 00:07:33.406 11:32:32 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:07:33.406 11:32:32 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:33.406 11:32:32 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:33.406 11:32:32 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:33.406 11:32:32 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:33.406 11:32:32 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:33.406 11:32:32 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:33.406 11:32:32 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:33.406 11:32:32 event.app_repeat -- event/event.sh@19 -- # repeat_pid=63729 00:07:33.406 11:32:32 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:33.406 11:32:32 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:33.406 11:32:32 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 63729' 00:07:33.406 Process app_repeat pid: 63729 00:07:33.406 11:32:32 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:33.406 spdk_app_start Round 0 00:07:33.406 11:32:32 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:33.406 11:32:32 event.app_repeat -- event/event.sh@25 -- # waitforlisten 63729 /var/tmp/spdk-nbd.sock 00:07:33.406 11:32:32 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 63729 ']' 00:07:33.406 11:32:32 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:33.406 11:32:32 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:33.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:33.406 11:32:32 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:33.406 11:32:32 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:33.406 11:32:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:33.406 [2024-07-25 11:32:32.327096] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
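app_repeat gets a private RPC socket (-r /var/tmp/spdk-nbd.sock), a two-core mask, and a 4-second per-round timer, and the harness blocks in waitforlisten until the RPC server answers. A simplified launch-and-wait sketch; the poll loop is only a stand-in for the real waitforlisten helper:

# Sketch: start app_repeat on its own RPC socket and wait for it to listen.
SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/spdk-nbd.sock
"$SPDK/test/event/app_repeat/app_repeat" -r "$SOCK" -m 0x3 -t 4 &
repeat_pid=$!
until "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
  sleep 0.1                                  # poll until the RPC server responds
done
echo "app_repeat (pid $repeat_pid) is listening on $SOCK"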
00:07:33.406 [2024-07-25 11:32:32.327325] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63729 ] 00:07:33.663 [2024-07-25 11:32:32.510635] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:33.921 [2024-07-25 11:32:32.799420] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.921 [2024-07-25 11:32:32.799431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.485 11:32:33 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:34.485 11:32:33 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:34.485 11:32:33 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:34.742 Malloc0 00:07:34.742 11:32:33 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:35.000 Malloc1 00:07:35.000 11:32:33 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:35.000 11:32:33 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:35.000 11:32:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:35.000 11:32:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:35.000 11:32:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:35.000 11:32:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:35.000 11:32:33 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:35.000 11:32:33 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:35.000 11:32:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:35.000 11:32:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:35.000 11:32:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:35.000 11:32:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:35.000 11:32:33 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:35.000 11:32:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:35.000 11:32:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:35.000 11:32:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:35.259 /dev/nbd0 00:07:35.259 11:32:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:35.259 11:32:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:35.259 11:32:34 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:35.259 11:32:34 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:35.259 11:32:34 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:35.259 11:32:34 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:35.259 11:32:34 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:35.259 11:32:34 event.app_repeat -- 
common/autotest_common.sh@873 -- # break 00:07:35.259 11:32:34 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:35.259 11:32:34 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:35.259 11:32:34 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:35.259 1+0 records in 00:07:35.259 1+0 records out 00:07:35.259 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00033406 s, 12.3 MB/s 00:07:35.259 11:32:34 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:35.517 11:32:34 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:35.517 11:32:34 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:35.517 11:32:34 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:35.517 11:32:34 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:35.517 11:32:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:35.517 11:32:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:35.517 11:32:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:35.775 /dev/nbd1 00:07:35.775 11:32:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:35.775 11:32:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:35.775 11:32:34 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:35.775 11:32:34 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:35.775 11:32:34 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:35.775 11:32:34 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:35.775 11:32:34 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:35.775 11:32:34 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:35.775 11:32:34 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:35.775 11:32:34 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:35.775 11:32:34 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:35.775 1+0 records in 00:07:35.775 1+0 records out 00:07:35.775 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000296807 s, 13.8 MB/s 00:07:35.775 11:32:34 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:35.775 11:32:34 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:35.775 11:32:34 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:35.775 11:32:34 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:35.775 11:32:34 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:35.775 11:32:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:35.775 11:32:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:35.775 11:32:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:35.775 11:32:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:35.775 
11:32:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:36.033 11:32:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:36.033 { 00:07:36.033 "nbd_device": "/dev/nbd0", 00:07:36.033 "bdev_name": "Malloc0" 00:07:36.033 }, 00:07:36.033 { 00:07:36.033 "nbd_device": "/dev/nbd1", 00:07:36.033 "bdev_name": "Malloc1" 00:07:36.033 } 00:07:36.033 ]' 00:07:36.033 11:32:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:36.033 { 00:07:36.033 "nbd_device": "/dev/nbd0", 00:07:36.033 "bdev_name": "Malloc0" 00:07:36.033 }, 00:07:36.033 { 00:07:36.033 "nbd_device": "/dev/nbd1", 00:07:36.033 "bdev_name": "Malloc1" 00:07:36.033 } 00:07:36.033 ]' 00:07:36.033 11:32:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:36.033 11:32:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:36.033 /dev/nbd1' 00:07:36.033 11:32:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:36.033 /dev/nbd1' 00:07:36.033 11:32:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:36.033 11:32:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:36.033 11:32:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:36.033 11:32:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:36.033 11:32:34 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:36.033 11:32:34 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:36.033 11:32:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:36.033 11:32:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:36.033 11:32:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:36.033 11:32:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:36.033 11:32:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:36.033 11:32:34 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:36.033 256+0 records in 00:07:36.033 256+0 records out 00:07:36.033 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00490037 s, 214 MB/s 00:07:36.033 11:32:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:36.033 11:32:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:36.033 256+0 records in 00:07:36.033 256+0 records out 00:07:36.033 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0311218 s, 33.7 MB/s 00:07:36.033 11:32:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:36.033 11:32:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:36.033 256+0 records in 00:07:36.033 256+0 records out 00:07:36.033 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0462973 s, 22.6 MB/s 00:07:36.033 11:32:35 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:36.033 11:32:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:36.033 11:32:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:36.034 11:32:35 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:36.034 11:32:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:36.034 11:32:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:36.034 11:32:35 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:36.034 11:32:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:36.034 11:32:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:36.034 11:32:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:36.034 11:32:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:36.034 11:32:35 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:36.034 11:32:35 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:36.034 11:32:35 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:36.034 11:32:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:36.034 11:32:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:36.034 11:32:35 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:36.034 11:32:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:36.034 11:32:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:36.600 11:32:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:36.600 11:32:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:36.600 11:32:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:36.600 11:32:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:36.600 11:32:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:36.600 11:32:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:36.600 11:32:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:36.600 11:32:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:36.600 11:32:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:36.600 11:32:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:36.600 11:32:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:36.600 11:32:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:36.600 11:32:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:36.600 11:32:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:36.600 11:32:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:36.600 11:32:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:36.600 11:32:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:36.600 11:32:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:36.600 11:32:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:36.600 11:32:35 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:36.600 11:32:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:37.165 11:32:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:37.165 11:32:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:37.165 11:32:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:37.165 11:32:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:37.165 11:32:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:37.165 11:32:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:37.165 11:32:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:37.165 11:32:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:37.165 11:32:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:37.165 11:32:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:37.165 11:32:36 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:37.165 11:32:36 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:37.165 11:32:36 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:37.730 11:32:36 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:39.105 [2024-07-25 11:32:37.742582] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:39.105 [2024-07-25 11:32:37.980250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:39.105 [2024-07-25 11:32:37.980254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.365 [2024-07-25 11:32:38.171884] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:39.365 [2024-07-25 11:32:38.172001] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:40.756 11:32:39 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:40.756 spdk_app_start Round 1 00:07:40.756 11:32:39 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:40.756 11:32:39 event.app_repeat -- event/event.sh@25 -- # waitforlisten 63729 /var/tmp/spdk-nbd.sock 00:07:40.756 11:32:39 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 63729 ']' 00:07:40.756 11:32:39 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:40.756 11:32:39 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:40.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:40.757 11:32:39 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
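Each app_repeat round drives the same nbd cycle seen above: export Malloc0/Malloc1 as /dev/nbd0 and /dev/nbd1, write 1 MiB of random data through each device, compare it back with cmp, then stop the disks and confirm nbd_get_disks reports an empty list. A condensed sketch of the verify-and-teardown half, assuming both disks are already started on the app_repeat socket:

# Sketch: verify data through the nbd exports, then tear them down.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
TMP=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
dd if=/dev/urandom of="$TMP" bs=4096 count=256           # 1 MiB reference file
for nbd in /dev/nbd0 /dev/nbd1; do
  dd if="$TMP" of="$nbd" bs=4096 count=256 oflag=direct  # write through the export
  cmp -b -n 1M "$TMP" "$nbd"                             # read back, must match
  $RPC nbd_stop_disk "$nbd"
done
rm "$TMP"
# with everything stopped, no /dev/nbd entries should remain
count=$($RPC nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd)
[ "$count" -eq 0 ]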
00:07:40.757 11:32:39 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:40.757 11:32:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:40.757 11:32:39 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:40.757 11:32:39 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:40.757 11:32:39 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:41.323 Malloc0 00:07:41.323 11:32:40 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:41.582 Malloc1 00:07:41.582 11:32:40 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:41.582 11:32:40 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:41.582 11:32:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:41.582 11:32:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:41.582 11:32:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:41.582 11:32:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:41.582 11:32:40 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:41.582 11:32:40 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:41.582 11:32:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:41.582 11:32:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:41.582 11:32:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:41.582 11:32:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:41.582 11:32:40 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:41.582 11:32:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:41.582 11:32:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:41.582 11:32:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:41.840 /dev/nbd0 00:07:41.840 11:32:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:41.840 11:32:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:41.840 11:32:40 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:41.840 11:32:40 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:41.840 11:32:40 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:41.840 11:32:40 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:41.840 11:32:40 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:41.840 11:32:40 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:41.840 11:32:40 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:41.840 11:32:40 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:41.840 11:32:40 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:41.840 1+0 records in 00:07:41.840 1+0 records out 
00:07:41.840 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000362499 s, 11.3 MB/s 00:07:41.840 11:32:40 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:41.840 11:32:40 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:41.840 11:32:40 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:41.840 11:32:40 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:41.840 11:32:40 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:41.840 11:32:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:41.840 11:32:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:41.840 11:32:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:42.099 /dev/nbd1 00:07:42.099 11:32:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:42.099 11:32:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:42.099 11:32:41 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:42.099 11:32:41 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:42.099 11:32:41 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:42.099 11:32:41 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:42.099 11:32:41 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:42.099 11:32:41 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:42.099 11:32:41 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:42.099 11:32:41 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:42.099 11:32:41 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:42.099 1+0 records in 00:07:42.099 1+0 records out 00:07:42.099 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000378449 s, 10.8 MB/s 00:07:42.099 11:32:41 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:42.099 11:32:41 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:42.099 11:32:41 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:42.099 11:32:41 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:42.099 11:32:41 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:42.099 11:32:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:42.099 11:32:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:42.099 11:32:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:42.099 11:32:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:42.099 11:32:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:42.357 11:32:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:42.357 { 00:07:42.357 "nbd_device": "/dev/nbd0", 00:07:42.357 "bdev_name": "Malloc0" 00:07:42.357 }, 00:07:42.357 { 00:07:42.357 "nbd_device": "/dev/nbd1", 00:07:42.357 "bdev_name": "Malloc1" 00:07:42.357 } 
00:07:42.357 ]' 00:07:42.357 11:32:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:42.357 { 00:07:42.357 "nbd_device": "/dev/nbd0", 00:07:42.357 "bdev_name": "Malloc0" 00:07:42.357 }, 00:07:42.357 { 00:07:42.357 "nbd_device": "/dev/nbd1", 00:07:42.357 "bdev_name": "Malloc1" 00:07:42.357 } 00:07:42.357 ]' 00:07:42.357 11:32:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:42.615 11:32:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:42.615 /dev/nbd1' 00:07:42.615 11:32:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:42.615 /dev/nbd1' 00:07:42.615 11:32:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:42.615 11:32:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:42.615 11:32:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:42.615 11:32:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:42.615 11:32:41 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:42.615 11:32:41 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:42.615 11:32:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:42.615 11:32:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:42.615 11:32:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:42.615 11:32:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:42.615 11:32:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:42.615 11:32:41 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:42.615 256+0 records in 00:07:42.615 256+0 records out 00:07:42.615 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0110695 s, 94.7 MB/s 00:07:42.615 11:32:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:42.615 11:32:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:42.615 256+0 records in 00:07:42.615 256+0 records out 00:07:42.615 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0310724 s, 33.7 MB/s 00:07:42.615 11:32:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:42.615 11:32:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:42.615 256+0 records in 00:07:42.615 256+0 records out 00:07:42.615 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.035263 s, 29.7 MB/s 00:07:42.615 11:32:41 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:42.615 11:32:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:42.615 11:32:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:42.615 11:32:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:42.615 11:32:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:42.615 11:32:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:42.615 11:32:41 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:42.615 11:32:41 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:07:42.615 11:32:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:42.615 11:32:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:42.615 11:32:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:42.615 11:32:41 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:42.615 11:32:41 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:42.615 11:32:41 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:42.615 11:32:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:42.615 11:32:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:42.615 11:32:41 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:42.615 11:32:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:42.615 11:32:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:42.881 11:32:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:42.881 11:32:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:42.881 11:32:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:42.881 11:32:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:42.881 11:32:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:42.881 11:32:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:42.881 11:32:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:42.881 11:32:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:42.881 11:32:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:42.881 11:32:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:43.139 11:32:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:43.139 11:32:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:43.139 11:32:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:43.139 11:32:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:43.139 11:32:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:43.139 11:32:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:43.139 11:32:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:43.139 11:32:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:43.139 11:32:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:43.139 11:32:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:43.139 11:32:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:43.397 11:32:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:43.397 11:32:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:43.397 11:32:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # 
echo '[]' 00:07:43.656 11:32:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:43.656 11:32:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:43.656 11:32:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:43.656 11:32:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:43.656 11:32:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:43.656 11:32:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:43.656 11:32:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:43.656 11:32:42 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:43.656 11:32:42 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:43.656 11:32:42 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:43.915 11:32:42 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:45.288 [2024-07-25 11:32:44.203976] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:45.547 [2024-07-25 11:32:44.454693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.547 [2024-07-25 11:32:44.454702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:45.805 [2024-07-25 11:32:44.658955] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:45.805 [2024-07-25 11:32:44.659137] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:47.189 spdk_app_start Round 2 00:07:47.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:47.189 11:32:45 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:47.190 11:32:45 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:47.190 11:32:45 event.app_repeat -- event/event.sh@25 -- # waitforlisten 63729 /var/tmp/spdk-nbd.sock 00:07:47.190 11:32:45 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 63729 ']' 00:07:47.190 11:32:45 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:47.190 11:32:45 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:47.190 11:32:45 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
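The round traced above boils down to a short RPC-driven round-trip: create a malloc bdev, export it over nbd, write a random pattern through the device, and cmp it back. A minimal sketch of that flow, assuming an SPDK target is already listening on the socket (the rpc.py path and socket are taken from the trace; the pattern-file location here is illustrative, the harness itself uses test/event/nbdrandtest):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock
  pattern=/tmp/nbdrandtest                            # illustrative stand-in path

  "$rpc" -s "$sock" bdev_malloc_create 64 4096        # 64 MB bdev, 4096-byte blocks -> Malloc0
  "$rpc" -s "$sock" nbd_start_disk Malloc0 /dev/nbd0

  dd if=/dev/urandom of="$pattern" bs=4096 count=256  # 1 MiB random pattern
  dd if="$pattern" of=/dev/nbd0 bs=4096 count=256 oflag=direct
  cmp -b -n 1M "$pattern" /dev/nbd0                   # byte-for-byte read-back check

  "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0
  "$rpc" -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true
                                                      # grep exits non-zero once the list is empty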
00:07:47.190 11:32:45 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:47.190 11:32:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:47.190 11:32:46 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:47.190 11:32:46 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:47.190 11:32:46 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:47.754 Malloc0 00:07:47.754 11:32:46 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:48.012 Malloc1 00:07:48.012 11:32:46 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:48.012 11:32:46 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:48.012 11:32:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:48.012 11:32:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:48.012 11:32:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:48.012 11:32:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:48.012 11:32:46 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:48.012 11:32:46 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:48.012 11:32:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:48.012 11:32:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:48.012 11:32:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:48.012 11:32:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:48.012 11:32:46 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:48.012 11:32:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:48.012 11:32:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:48.012 11:32:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:48.270 /dev/nbd0 00:07:48.270 11:32:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:48.270 11:32:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:48.270 11:32:47 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:48.270 11:32:47 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:48.270 11:32:47 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:48.270 11:32:47 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:48.270 11:32:47 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:48.271 11:32:47 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:48.271 11:32:47 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:48.271 11:32:47 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:48.271 11:32:47 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:48.271 1+0 records in 00:07:48.271 1+0 records out 
00:07:48.271 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000686713 s, 6.0 MB/s 00:07:48.271 11:32:47 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:48.271 11:32:47 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:48.271 11:32:47 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:48.271 11:32:47 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:48.271 11:32:47 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:48.271 11:32:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:48.271 11:32:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:48.271 11:32:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:48.529 /dev/nbd1 00:07:48.529 11:32:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:48.529 11:32:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:48.529 11:32:47 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:48.529 11:32:47 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:48.529 11:32:47 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:48.529 11:32:47 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:48.529 11:32:47 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:48.529 11:32:47 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:48.529 11:32:47 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:48.529 11:32:47 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:48.529 11:32:47 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:48.529 1+0 records in 00:07:48.529 1+0 records out 00:07:48.529 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000537717 s, 7.6 MB/s 00:07:48.529 11:32:47 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:48.529 11:32:47 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:48.529 11:32:47 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:48.529 11:32:47 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:48.529 11:32:47 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:48.529 11:32:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:48.529 11:32:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:48.529 11:32:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:48.529 11:32:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:48.529 11:32:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:48.787 11:32:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:48.787 { 00:07:48.787 "nbd_device": "/dev/nbd0", 00:07:48.787 "bdev_name": "Malloc0" 00:07:48.787 }, 00:07:48.787 { 00:07:48.787 "nbd_device": "/dev/nbd1", 00:07:48.787 "bdev_name": "Malloc1" 00:07:48.787 } 
00:07:48.787 ]' 00:07:48.787 11:32:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:48.787 { 00:07:48.787 "nbd_device": "/dev/nbd0", 00:07:48.787 "bdev_name": "Malloc0" 00:07:48.787 }, 00:07:48.787 { 00:07:48.787 "nbd_device": "/dev/nbd1", 00:07:48.787 "bdev_name": "Malloc1" 00:07:48.787 } 00:07:48.787 ]' 00:07:48.787 11:32:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:48.787 11:32:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:48.787 /dev/nbd1' 00:07:48.787 11:32:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:48.787 /dev/nbd1' 00:07:48.787 11:32:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:48.787 11:32:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:48.787 11:32:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:48.787 11:32:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:48.787 11:32:47 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:48.787 11:32:47 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:48.787 11:32:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:48.787 11:32:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:49.046 11:32:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:49.046 11:32:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:49.046 11:32:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:49.046 11:32:47 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:49.046 256+0 records in 00:07:49.046 256+0 records out 00:07:49.046 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00740901 s, 142 MB/s 00:07:49.046 11:32:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:49.046 11:32:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:49.046 256+0 records in 00:07:49.046 256+0 records out 00:07:49.046 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0307396 s, 34.1 MB/s 00:07:49.046 11:32:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:49.046 11:32:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:49.046 256+0 records in 00:07:49.046 256+0 records out 00:07:49.046 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.035454 s, 29.6 MB/s 00:07:49.046 11:32:47 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:49.046 11:32:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:49.046 11:32:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:49.046 11:32:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:49.046 11:32:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:49.046 11:32:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:49.046 11:32:47 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:49.046 11:32:47 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:07:49.046 11:32:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:49.046 11:32:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:49.046 11:32:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:49.046 11:32:47 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:49.046 11:32:47 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:49.046 11:32:47 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:49.046 11:32:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:49.046 11:32:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:49.046 11:32:47 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:49.046 11:32:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:49.046 11:32:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:49.311 11:32:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:49.311 11:32:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:49.311 11:32:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:49.311 11:32:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:49.311 11:32:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:49.311 11:32:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:49.311 11:32:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:49.311 11:32:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:49.311 11:32:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:49.311 11:32:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:49.604 11:32:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:49.604 11:32:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:49.604 11:32:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:49.604 11:32:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:49.604 11:32:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:49.604 11:32:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:49.604 11:32:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:49.604 11:32:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:49.604 11:32:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:49.604 11:32:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:49.604 11:32:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:49.869 11:32:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:49.869 11:32:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:49.869 11:32:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:07:49.869 11:32:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:49.869 11:32:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:49.869 11:32:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:49.869 11:32:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:49.869 11:32:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:49.869 11:32:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:49.869 11:32:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:49.869 11:32:48 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:49.869 11:32:48 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:49.869 11:32:48 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:50.127 11:32:49 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:51.501 [2024-07-25 11:32:50.399361] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:51.760 [2024-07-25 11:32:50.653463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:51.760 [2024-07-25 11:32:50.653471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.018 [2024-07-25 11:32:50.897342] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:52.018 [2024-07-25 11:32:50.897475] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:53.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:53.442 11:32:52 event.app_repeat -- event/event.sh@38 -- # waitforlisten 63729 /var/tmp/spdk-nbd.sock 00:07:53.442 11:32:52 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 63729 ']' 00:07:53.442 11:32:52 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:53.442 11:32:52 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:53.442 11:32:52 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
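Rounds 0 through 2 all follow the same driver loop, visible in the trace at event.sh@23-35. A condensed sketch, assuming the harness helpers from autotest_common.sh are sourced and $app_pid/$rpc are stand-ins for the app_repeat pid and the rpc.py path:

  for i in {0..2}; do
    echo "spdk_app_start Round $i"
    waitforlisten "$app_pid" /var/tmp/spdk-nbd.sock   # block until the RPC socket is back up
    # create Malloc0/Malloc1 and run the nbd write/verify pass sketched earlier
    "$rpc" -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
    sleep 3                                           # give app_repeat time to relaunch the app
  done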
00:07:53.442 11:32:52 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:53.442 11:32:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:53.442 11:32:52 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:53.442 11:32:52 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:53.442 11:32:52 event.app_repeat -- event/event.sh@39 -- # killprocess 63729 00:07:53.442 11:32:52 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 63729 ']' 00:07:53.442 11:32:52 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 63729 00:07:53.442 11:32:52 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:07:53.442 11:32:52 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:53.442 11:32:52 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63729 00:07:53.442 killing process with pid 63729 00:07:53.442 11:32:52 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:53.442 11:32:52 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:53.442 11:32:52 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63729' 00:07:53.442 11:32:52 event.app_repeat -- common/autotest_common.sh@969 -- # kill 63729 00:07:53.442 11:32:52 event.app_repeat -- common/autotest_common.sh@974 -- # wait 63729 00:07:54.814 spdk_app_start is called in Round 0. 00:07:54.814 Shutdown signal received, stop current app iteration 00:07:54.814 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 reinitialization... 00:07:54.814 spdk_app_start is called in Round 1. 00:07:54.814 Shutdown signal received, stop current app iteration 00:07:54.814 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 reinitialization... 00:07:54.814 spdk_app_start is called in Round 2. 00:07:54.814 Shutdown signal received, stop current app iteration 00:07:54.814 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 reinitialization... 00:07:54.814 spdk_app_start is called in Round 3. 00:07:54.814 Shutdown signal received, stop current app iteration 00:07:54.814 ************************************ 00:07:54.814 END TEST app_repeat 00:07:54.814 ************************************ 00:07:54.814 11:32:53 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:54.814 11:32:53 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:54.814 00:07:54.814 real 0m21.486s 00:07:54.814 user 0m45.645s 00:07:54.814 sys 0m3.219s 00:07:54.814 11:32:53 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:54.814 11:32:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:54.814 11:32:53 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:54.814 11:32:53 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:54.814 11:32:53 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:54.814 11:32:53 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:54.814 11:32:53 event -- common/autotest_common.sh@10 -- # set +x 00:07:54.814 ************************************ 00:07:54.814 START TEST cpu_locks 00:07:54.814 ************************************ 00:07:54.814 11:32:53 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:55.072 * Looking for test storage... 
00:07:55.072 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:55.072 11:32:53 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:55.072 11:32:53 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:55.072 11:32:53 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:55.072 11:32:53 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:55.072 11:32:53 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:55.072 11:32:53 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:55.072 11:32:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:55.072 ************************************ 00:07:55.072 START TEST default_locks 00:07:55.072 ************************************ 00:07:55.072 11:32:53 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:07:55.072 11:32:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=64185 00:07:55.072 11:32:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:55.072 11:32:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 64185 00:07:55.072 11:32:53 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 64185 ']' 00:07:55.072 11:32:53 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:55.072 11:32:53 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:55.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:55.072 11:32:53 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:55.072 11:32:53 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:55.072 11:32:53 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:55.072 [2024-07-25 11:32:54.022960] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:07:55.072 [2024-07-25 11:32:54.023149] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64185 ] 00:07:55.331 [2024-07-25 11:32:54.201761] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.588 [2024-07-25 11:32:54.490954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.521 11:32:55 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:56.521 11:32:55 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:07:56.521 11:32:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 64185 00:07:56.521 11:32:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 64185 00:07:56.521 11:32:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:56.779 11:32:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 64185 00:07:56.779 11:32:55 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 64185 ']' 00:07:56.779 11:32:55 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 64185 00:07:56.779 11:32:55 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:07:56.779 11:32:55 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:56.779 11:32:55 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64185 00:07:56.779 11:32:55 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:56.779 killing process with pid 64185 00:07:56.779 11:32:55 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:56.779 11:32:55 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64185' 00:07:56.779 11:32:55 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 64185 00:07:56.779 11:32:55 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 64185 00:07:59.307 11:32:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 64185 00:07:59.307 11:32:58 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:07:59.307 11:32:58 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 64185 00:07:59.307 11:32:58 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:59.307 11:32:58 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:59.307 11:32:58 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:59.307 11:32:58 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:59.307 11:32:58 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 64185 00:07:59.307 11:32:58 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 64185 ']' 00:07:59.307 11:32:58 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.307 11:32:58 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:59.307 11:32:58 
event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:59.307 11:32:58 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:59.307 11:32:58 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:59.307 ERROR: process (pid: 64185) is no longer running 00:07:59.307 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (64185) - No such process 00:07:59.307 11:32:58 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:59.307 11:32:58 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:07:59.307 11:32:58 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:07:59.307 11:32:58 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:59.307 11:32:58 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:59.307 11:32:58 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:59.307 11:32:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:59.307 11:32:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:59.307 11:32:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:59.307 11:32:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:59.307 00:07:59.307 real 0m4.219s 00:07:59.307 user 0m4.224s 00:07:59.307 sys 0m0.751s 00:07:59.307 ************************************ 00:07:59.307 END TEST default_locks 00:07:59.307 ************************************ 00:07:59.307 11:32:58 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:59.308 11:32:58 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:59.308 11:32:58 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:59.308 11:32:58 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:59.308 11:32:58 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:59.308 11:32:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:59.308 ************************************ 00:07:59.308 START TEST default_locks_via_rpc 00:07:59.308 ************************************ 00:07:59.308 11:32:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:07:59.308 11:32:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=64260 00:07:59.308 11:32:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 64260 00:07:59.308 11:32:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 64260 ']' 00:07:59.308 11:32:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:59.308 11:32:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.308 11:32:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:59.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
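The assertion behind default_locks above is compact: a started SPDK app takes a file lock named spdk_cpu_lock for each core it claims, so the harness only has to ask lslocks whether the pid holds one. A paraphrase of the helper traced at cpu_locks.sh@22 (the real helper lives in test/event/cpu_locks.sh):

  locks_exist() {
    local pid=$1
    lslocks -p "$pid" | grep -q spdk_cpu_lock   # succeeds only if the core lock is held
  }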
00:07:59.308 11:32:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.308 11:32:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:59.308 11:32:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:59.308 [2024-07-25 11:32:58.303510] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:59.308 [2024-07-25 11:32:58.303711] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64260 ] 00:07:59.566 [2024-07-25 11:32:58.480407] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.823 [2024-07-25 11:32:58.736610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.757 11:32:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:00.757 11:32:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:00.757 11:32:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:08:00.757 11:32:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.757 11:32:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.757 11:32:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.757 11:32:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:08:00.757 11:32:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:00.757 11:32:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:08:00.757 11:32:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:00.757 11:32:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:08:00.757 11:32:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.757 11:32:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.757 11:32:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.757 11:32:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 64260 00:08:00.757 11:32:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 64260 00:08:00.757 11:32:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:01.014 11:32:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 64260 00:08:01.014 11:32:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 64260 ']' 00:08:01.014 11:32:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 64260 00:08:01.014 11:32:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:08:01.014 11:32:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:01.014 11:32:59 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64260 00:08:01.014 killing process with pid 64260 00:08:01.015 11:32:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:01.015 11:32:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:01.015 11:32:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64260' 00:08:01.015 11:32:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 64260 00:08:01.015 11:32:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 64260 00:08:03.545 00:08:03.545 real 0m3.933s 00:08:03.545 user 0m3.930s 00:08:03.545 sys 0m0.708s 00:08:03.545 11:33:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:03.545 11:33:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:03.545 ************************************ 00:08:03.545 END TEST default_locks_via_rpc 00:08:03.545 ************************************ 00:08:03.545 11:33:02 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:08:03.545 11:33:02 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:03.545 11:33:02 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:03.545 11:33:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:03.545 ************************************ 00:08:03.545 START TEST non_locking_app_on_locked_coremask 00:08:03.545 ************************************ 00:08:03.545 11:33:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:08:03.545 11:33:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=64334 00:08:03.545 11:33:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 64334 /var/tmp/spdk.sock 00:08:03.546 11:33:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:03.546 11:33:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 64334 ']' 00:08:03.546 11:33:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:03.546 11:33:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:03.546 11:33:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.546 11:33:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:03.546 11:33:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:03.546 [2024-07-25 11:33:02.282576] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:08:03.546 [2024-07-25 11:33:02.282750] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64334 ] 00:08:03.546 [2024-07-25 11:33:02.443847] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.803 [2024-07-25 11:33:02.688226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.738 11:33:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:04.738 11:33:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:04.738 11:33:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=64356 00:08:04.738 11:33:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:08:04.738 11:33:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 64356 /var/tmp/spdk2.sock 00:08:04.738 11:33:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 64356 ']' 00:08:04.738 11:33:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:04.738 11:33:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:04.738 11:33:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:04.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:04.738 11:33:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:04.738 11:33:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:04.738 [2024-07-25 11:33:03.616893] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:04.738 [2024-07-25 11:33:03.617400] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64356 ] 00:08:04.996 [2024-07-25 11:33:03.811907] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
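The two launches above are the point of this test: the first spdk_tgt takes the lock for core 0, and the second can only come up on the same -m 0x1 mask because it is told to skip the locks and use its own RPC socket. Condensed from the trace (both command lines appear verbatim in the log; waitforlisten is the sourced harness helper):

  tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  "$tgt" -m 0x1 &                                   # pid 64334 here; locks core 0
  pid1=$!
  waitforlisten "$pid1"
  "$tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  pid2=$!                                           # pid 64356; takes no lock, so
  waitforlisten "$pid2" /var/tmp/spdk2.sock         # sharing core 0 is permitted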
00:08:04.996 [2024-07-25 11:33:03.816025] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.254 [2024-07-25 11:33:04.291003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.778 11:33:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:07.778 11:33:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:07.778 11:33:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 64334 00:08:07.778 11:33:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 64334 00:08:07.778 11:33:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:08.344 11:33:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 64334 00:08:08.344 11:33:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 64334 ']' 00:08:08.344 11:33:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 64334 00:08:08.344 11:33:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:08.344 11:33:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:08.344 11:33:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64334 00:08:08.344 killing process with pid 64334 00:08:08.344 11:33:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:08.344 11:33:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:08.345 11:33:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64334' 00:08:08.345 11:33:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 64334 00:08:08.345 11:33:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 64334 00:08:13.639 11:33:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 64356 00:08:13.639 11:33:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 64356 ']' 00:08:13.639 11:33:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 64356 00:08:13.639 11:33:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:13.639 11:33:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:13.639 11:33:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64356 00:08:13.639 killing process with pid 64356 00:08:13.639 11:33:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:13.639 11:33:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:13.639 11:33:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64356' 00:08:13.639 11:33:11 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 64356 00:08:13.639 11:33:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 64356 00:08:15.538 00:08:15.538 real 0m11.991s 00:08:15.538 user 0m12.388s 00:08:15.538 sys 0m1.583s 00:08:15.538 11:33:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:15.538 11:33:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:15.538 ************************************ 00:08:15.538 END TEST non_locking_app_on_locked_coremask 00:08:15.538 ************************************ 00:08:15.538 11:33:14 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:08:15.538 11:33:14 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:15.538 11:33:14 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:15.538 11:33:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:15.538 ************************************ 00:08:15.538 START TEST locking_app_on_unlocked_coremask 00:08:15.538 ************************************ 00:08:15.538 11:33:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:08:15.538 11:33:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=64510 00:08:15.538 11:33:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 64510 /var/tmp/spdk.sock 00:08:15.538 11:33:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:08:15.538 11:33:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 64510 ']' 00:08:15.538 11:33:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:15.538 11:33:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:15.538 11:33:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:15.538 11:33:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:15.538 11:33:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:15.538 [2024-07-25 11:33:14.349863] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:15.538 [2024-07-25 11:33:14.350087] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64510 ] 00:08:15.538 [2024-07-25 11:33:14.529896] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:15.538 [2024-07-25 11:33:14.529984] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.797 [2024-07-25 11:33:14.788392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.729 11:33:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:16.729 11:33:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:16.729 11:33:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=64526 00:08:16.729 11:33:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 64526 /var/tmp/spdk2.sock 00:08:16.729 11:33:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:16.729 11:33:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 64526 ']' 00:08:16.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:16.729 11:33:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:16.729 11:33:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:16.729 11:33:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:16.729 11:33:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:16.729 11:33:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:16.729 [2024-07-25 11:33:15.775498] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
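The point of locking_app_on_unlocked_coremask is that a target started with --disable-cpumask-locks leaves core 0 unclaimed, so a second target pinned to the same core can come up and take the lock itself. Condensed from the two launch lines in the trace (binary path as logged):

  # first target: core 0, but no per-core lock is claimed
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &
  # second target: same core, own RPC socket, claims the lock normally
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &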
00:08:16.729 [2024-07-25 11:33:15.775721] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64526 ] 00:08:17.011 [2024-07-25 11:33:15.968162] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.576 [2024-07-25 11:33:16.470668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.105 11:33:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:20.105 11:33:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:20.105 11:33:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 64526 00:08:20.105 11:33:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 64526 00:08:20.105 11:33:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:20.373 11:33:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 64510 00:08:20.373 11:33:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 64510 ']' 00:08:20.373 11:33:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 64510 00:08:20.373 11:33:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:20.373 11:33:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:20.373 11:33:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64510 00:08:20.652 11:33:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:20.652 killing process with pid 64510 00:08:20.652 11:33:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:20.652 11:33:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64510' 00:08:20.652 11:33:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 64510 00:08:20.652 11:33:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 64510 00:08:25.916 11:33:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 64526 00:08:25.916 11:33:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 64526 ']' 00:08:25.916 11:33:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 64526 00:08:25.916 11:33:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:25.916 11:33:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:25.916 11:33:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64526 00:08:25.916 11:33:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:25.916 11:33:24 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:25.916 killing process with pid 64526 00:08:25.916 11:33:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64526' 00:08:25.916 11:33:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 64526 00:08:25.916 11:33:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 64526 00:08:27.819 00:08:27.819 real 0m12.152s 00:08:27.819 user 0m12.700s 00:08:27.819 sys 0m1.573s 00:08:27.819 11:33:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:27.819 ************************************ 00:08:27.819 END TEST locking_app_on_unlocked_coremask 00:08:27.819 ************************************ 00:08:27.819 11:33:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:27.819 11:33:26 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:08:27.819 11:33:26 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:27.819 11:33:26 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:27.819 11:33:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:27.819 ************************************ 00:08:27.819 START TEST locking_app_on_locked_coremask 00:08:27.819 ************************************ 00:08:27.819 11:33:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:08:27.819 11:33:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=64680 00:08:27.819 11:33:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 64680 /var/tmp/spdk.sock 00:08:27.819 11:33:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 64680 ']' 00:08:27.819 11:33:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.819 11:33:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:27.819 11:33:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:27.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:27.819 11:33:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.819 11:33:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:27.819 11:33:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:27.819 [2024-07-25 11:33:26.542217] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
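The killprocess helper whose xtrace recurs throughout this section (autotest_common.sh@950-974) can be reconstructed from the trace; this sketch simplifies the sudo branch, which the real helper handles with more care:

  killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1
    kill -0 "$pid" || return 1                        # still alive?
    if [ "$(uname)" = Linux ]; then
      local process_name
      process_name=$(ps --no-headers -o comm= "$pid") # reactor_0 in the runs above
      [ "$process_name" = sudo ] && return 1          # assumption: sudo needs special casing
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"   # valid here: the target was started by this same shell
  }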
00:08:27.819 [2024-07-25 11:33:26.542418] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64680 ] 00:08:27.819 [2024-07-25 11:33:26.719009] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.077 [2024-07-25 11:33:26.957992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.009 11:33:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:29.009 11:33:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:29.009 11:33:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=64696 00:08:29.009 11:33:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:29.009 11:33:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 64696 /var/tmp/spdk2.sock 00:08:29.009 11:33:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:08:29.009 11:33:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 64696 /var/tmp/spdk2.sock 00:08:29.009 11:33:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:29.009 11:33:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:29.009 11:33:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:29.009 11:33:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:29.009 11:33:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 64696 /var/tmp/spdk2.sock 00:08:29.009 11:33:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 64696 ']' 00:08:29.009 11:33:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:29.009 11:33:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:29.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:29.009 11:33:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:29.009 11:33:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:29.009 11:33:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:29.009 [2024-07-25 11:33:27.909813] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
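The NOT wrapper applied to waitforlisten above (autotest_common.sh@650-677) inverts an exit status: the wrapped command is expected to fail, and the test fails if it succeeds. A sketch of its core; the (( es > 128 )) branch in the trace presumably normalizes signal-death exit codes and is left as a comment:

  NOT() {
    local es=0
    "$@" || es=$?
    # the real helper also validates that $1 is runnable (type -t) and
    # special-cases es > 128 -- both visible in the xtrace above
    (( !es == 0 ))   # succeed only if the wrapped command failed
  }
  NOT waitforlisten 64696 /var/tmp/spdk2.sock   # passes: pid 64696 never comes up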
00:08:29.009 [2024-07-25 11:33:27.910003] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64696 ] 00:08:29.267 [2024-07-25 11:33:28.098902] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 64680 has claimed it. 00:08:29.267 [2024-07-25 11:33:28.098996] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:29.525 ERROR: process (pid: 64696) is no longer running 00:08:29.525 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (64696) - No such process 00:08:29.525 11:33:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:29.525 11:33:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:08:29.525 11:33:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:08:29.525 11:33:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:29.525 11:33:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:29.525 11:33:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:29.525 11:33:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 64680 00:08:29.525 11:33:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:29.525 11:33:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 64680 00:08:30.090 11:33:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 64680 00:08:30.090 11:33:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 64680 ']' 00:08:30.090 11:33:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 64680 00:08:30.090 11:33:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:30.090 11:33:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:30.090 11:33:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64680 00:08:30.090 killing process with pid 64680 00:08:30.090 11:33:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:30.090 11:33:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:30.090 11:33:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64680' 00:08:30.090 11:33:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 64680 00:08:30.090 11:33:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 64680 00:08:32.617 00:08:32.618 real 0m4.889s 00:08:32.618 user 0m5.155s 00:08:32.618 sys 0m0.911s 00:08:32.618 11:33:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:32.618 ************************************ 00:08:32.618 END 
TEST locking_app_on_locked_coremask 00:08:32.618 ************************************ 00:08:32.618 11:33:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:32.618 11:33:31 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:08:32.618 11:33:31 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:32.618 11:33:31 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:32.618 11:33:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:32.618 ************************************ 00:08:32.618 START TEST locking_overlapped_coremask 00:08:32.618 ************************************ 00:08:32.618 11:33:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:08:32.618 11:33:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=64771 00:08:32.618 11:33:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:08:32.618 11:33:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 64771 /var/tmp/spdk.sock 00:08:32.618 11:33:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 64771 ']' 00:08:32.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:32.618 11:33:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.618 11:33:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:32.618 11:33:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:32.618 11:33:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:32.618 11:33:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:32.618 [2024-07-25 11:33:31.471640] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
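The failure this test expects is plain mask arithmetic: the primary holds -m 0x7 and the secondary asks for -m 0x1c, and the two overlap in exactly one core — the "Cannot create lock on core 2" in the trace below:

  # 0x07 = 0b00111 -> cores 0,1,2 (pid 64771, the three reactors above)
  # 0x1c = 0b11100 -> cores 2,3,4 (the NOT-ed secondary)
  printf 'contested: 0x%x\n' $(( 0x07 & 0x1c ))   # -> 0x4, i.e. core 2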
00:08:32.618 [2024-07-25 11:33:31.471956] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64771 ] 00:08:32.618 [2024-07-25 11:33:31.652266] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:32.875 [2024-07-25 11:33:31.916651] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:32.875 [2024-07-25 11:33:31.916774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.875 [2024-07-25 11:33:31.916784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:33.808 11:33:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:33.808 11:33:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:33.808 11:33:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:08:33.808 11:33:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=64790 00:08:33.808 11:33:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 64790 /var/tmp/spdk2.sock 00:08:33.808 11:33:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:08:33.808 11:33:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 64790 /var/tmp/spdk2.sock 00:08:33.808 11:33:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:33.808 11:33:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:33.808 11:33:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:33.808 11:33:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:33.808 11:33:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 64790 /var/tmp/spdk2.sock 00:08:33.808 11:33:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 64790 ']' 00:08:33.808 11:33:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:33.808 11:33:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:33.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:33.808 11:33:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:33.808 11:33:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:33.808 11:33:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:33.808 [2024-07-25 11:33:32.851691] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:08:33.808 [2024-07-25 11:33:32.851887] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64790 ] 00:08:34.065 [2024-07-25 11:33:33.032371] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 64771 has claimed it. 00:08:34.065 [2024-07-25 11:33:33.032453] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:34.629 ERROR: process (pid: 64790) is no longer running 00:08:34.629 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (64790) - No such process 00:08:34.629 11:33:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:34.629 11:33:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:08:34.629 11:33:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:08:34.629 11:33:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:34.629 11:33:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:34.629 11:33:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:34.629 11:33:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:08:34.629 11:33:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:34.629 11:33:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:34.629 11:33:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:34.629 11:33:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 64771 00:08:34.629 11:33:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 64771 ']' 00:08:34.629 11:33:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 64771 00:08:34.629 11:33:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:08:34.629 11:33:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:34.629 11:33:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64771 00:08:34.629 11:33:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:34.629 11:33:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:34.629 killing process with pid 64771 00:08:34.629 11:33:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64771' 00:08:34.629 11:33:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 64771 00:08:34.629 11:33:33 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 64771 00:08:37.158 00:08:37.158 real 0m4.464s 00:08:37.158 user 0m11.509s 00:08:37.158 sys 0m0.696s 00:08:37.158 11:33:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:37.158 11:33:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:37.158 ************************************ 00:08:37.158 END TEST locking_overlapped_coremask 00:08:37.158 ************************************ 00:08:37.158 11:33:35 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:08:37.158 11:33:35 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:37.158 11:33:35 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:37.158 11:33:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:37.158 ************************************ 00:08:37.158 START TEST locking_overlapped_coremask_via_rpc 00:08:37.158 ************************************ 00:08:37.158 11:33:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:08:37.158 11:33:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=64854 00:08:37.158 11:33:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 64854 /var/tmp/spdk.sock 00:08:37.158 11:33:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 64854 ']' 00:08:37.158 11:33:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.158 11:33:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:08:37.158 11:33:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:37.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.158 11:33:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:37.158 11:33:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:37.158 11:33:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.158 [2024-07-25 11:33:35.996128] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:37.158 [2024-07-25 11:33:35.996337] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64854 ] 00:08:37.158 [2024-07-25 11:33:36.174330] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:37.158 [2024-07-25 11:33:36.174408] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:37.416 [2024-07-25 11:33:36.423756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:37.416 [2024-07-25 11:33:36.423859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.416 [2024-07-25 11:33:36.423870] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:38.349 11:33:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:38.349 11:33:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:38.349 11:33:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=64878 00:08:38.349 11:33:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:08:38.349 11:33:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 64878 /var/tmp/spdk2.sock 00:08:38.349 11:33:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 64878 ']' 00:08:38.349 11:33:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:38.349 11:33:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:38.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:38.349 11:33:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:38.349 11:33:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:38.349 11:33:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:38.349 [2024-07-25 11:33:37.338093] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:38.349 [2024-07-25 11:33:37.338283] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64878 ] 00:08:38.640 [2024-07-25 11:33:37.520018] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
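In the via_rpc variant both targets boot with --disable-cpumask-locks, so nothing is claimed at startup; the locks are taken at runtime with the framework_enable_cpumask_locks RPC, and the second call is expected to bounce off the shared core. The sequence exercised next, condensed (socket paths and RPC name as logged):

  spdk_tgt -m 0x7  --disable-cpumask-locks &                         # pid 64854
  spdk_tgt -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # pid 64878
  rpc.py framework_enable_cpumask_locks                              # 64854 claims cores 0-2
  rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks       # fails: core 2 taken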
00:08:38.640 [2024-07-25 11:33:37.520108] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:39.207 [2024-07-25 11:33:38.003754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:39.207 [2024-07-25 11:33:38.007062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:39.207 [2024-07-25 11:33:38.007088] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:41.739 11:33:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:41.739 11:33:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:41.739 11:33:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:08:41.739 11:33:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.739 11:33:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:41.739 11:33:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.739 11:33:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:41.739 11:33:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:08:41.739 11:33:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:41.739 11:33:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:41.739 11:33:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:41.739 11:33:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:41.739 11:33:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:41.739 11:33:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:41.739 11:33:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.739 11:33:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:41.739 [2024-07-25 11:33:40.180180] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 64854 has claimed it. 
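After the failed claim, check_remaining_locks (event/cpu_locks.sh@36-38, traced at the end of this test) asserts that exactly the primary's three lock files survive. Its glob comparison, reflowed from the trace:

  check_remaining_locks() {
    locks=(/var/tmp/spdk_cpu_lock_*)                   # whatever is claimed now
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) # cores 0-2, i.e. mask 0x7
    [[ ${locks[*]} == "${locks_expected[*]}" ]]
  }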
00:08:41.739 request: 00:08:41.739 { 00:08:41.739 "method": "framework_enable_cpumask_locks", 00:08:41.739 "req_id": 1 00:08:41.739 } 00:08:41.739 Got JSON-RPC error response 00:08:41.739 response: 00:08:41.739 { 00:08:41.739 "code": -32603, 00:08:41.739 "message": "Failed to claim CPU core: 2" 00:08:41.739 } 00:08:41.739 11:33:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:41.739 11:33:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:08:41.739 11:33:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:41.739 11:33:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:41.739 11:33:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:41.739 11:33:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 64854 /var/tmp/spdk.sock 00:08:41.739 11:33:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 64854 ']' 00:08:41.739 11:33:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:41.739 11:33:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:41.739 11:33:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:41.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:41.739 11:33:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:41.739 11:33:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:41.739 11:33:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:41.739 11:33:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:41.739 11:33:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 64878 /var/tmp/spdk2.sock 00:08:41.739 11:33:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 64878 ']' 00:08:41.739 11:33:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:41.739 11:33:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:41.739 11:33:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:41.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:08:41.739 11:33:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:41.739 11:33:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:41.739 11:33:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:41.739 11:33:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:41.739 11:33:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:08:41.739 ************************************ 00:08:41.739 END TEST locking_overlapped_coremask_via_rpc 00:08:41.739 ************************************ 00:08:41.739 11:33:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:41.739 11:33:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:41.739 11:33:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:41.739 00:08:41.739 real 0m4.890s 00:08:41.739 user 0m1.700s 00:08:41.739 sys 0m0.274s 00:08:41.739 11:33:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:41.739 11:33:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:42.024 11:33:40 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:08:42.024 11:33:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 64854 ]] 00:08:42.024 11:33:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 64854 00:08:42.024 11:33:40 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 64854 ']' 00:08:42.024 11:33:40 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 64854 00:08:42.024 11:33:40 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:08:42.024 11:33:40 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:42.024 11:33:40 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64854 00:08:42.024 killing process with pid 64854 00:08:42.024 11:33:40 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:42.024 11:33:40 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:42.024 11:33:40 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64854' 00:08:42.024 11:33:40 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 64854 00:08:42.024 11:33:40 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 64854 00:08:44.581 11:33:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 64878 ]] 00:08:44.581 11:33:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 64878 00:08:44.581 11:33:43 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 64878 ']' 00:08:44.581 11:33:43 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 64878 00:08:44.581 11:33:43 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:08:44.581 11:33:43 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:44.581 
11:33:43 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64878 00:08:44.581 killing process with pid 64878 00:08:44.581 11:33:43 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:08:44.581 11:33:43 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:08:44.581 11:33:43 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64878' 00:08:44.581 11:33:43 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 64878 00:08:44.581 11:33:43 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 64878 00:08:46.479 11:33:45 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:46.479 Process with pid 64854 is not found 00:08:46.479 Process with pid 64878 is not found 00:08:46.479 11:33:45 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:08:46.479 11:33:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 64854 ]] 00:08:46.479 11:33:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 64854 00:08:46.479 11:33:45 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 64854 ']' 00:08:46.479 11:33:45 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 64854 00:08:46.479 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (64854) - No such process 00:08:46.479 11:33:45 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 64854 is not found' 00:08:46.479 11:33:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 64878 ]] 00:08:46.479 11:33:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 64878 00:08:46.479 11:33:45 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 64878 ']' 00:08:46.479 11:33:45 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 64878 00:08:46.479 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (64878) - No such process 00:08:46.479 11:33:45 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 64878 is not found' 00:08:46.479 11:33:45 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:46.479 00:08:46.479 real 0m51.609s 00:08:46.479 user 1m27.002s 00:08:46.479 sys 0m7.735s 00:08:46.479 11:33:45 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:46.479 11:33:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:46.479 ************************************ 00:08:46.479 END TEST cpu_locks 00:08:46.479 ************************************ 00:08:46.479 00:08:46.479 real 1m24.581s 00:08:46.479 user 2m29.452s 00:08:46.479 sys 0m12.095s 00:08:46.479 11:33:45 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:46.479 11:33:45 event -- common/autotest_common.sh@10 -- # set +x 00:08:46.479 ************************************ 00:08:46.479 END TEST event 00:08:46.479 ************************************ 00:08:46.479 11:33:45 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:46.479 11:33:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:46.479 11:33:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:46.479 11:33:45 -- common/autotest_common.sh@10 -- # set +x 00:08:46.479 ************************************ 00:08:46.479 START TEST thread 00:08:46.479 ************************************ 00:08:46.479 11:33:45 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:46.737 * Looking for test storage... 
00:08:46.737 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:08:46.737 11:33:45 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:46.737 11:33:45 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:08:46.737 11:33:45 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:46.737 11:33:45 thread -- common/autotest_common.sh@10 -- # set +x 00:08:46.737 ************************************ 00:08:46.737 START TEST thread_poller_perf 00:08:46.737 ************************************ 00:08:46.737 11:33:45 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:46.737 [2024-07-25 11:33:45.615158] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:46.737 [2024-07-25 11:33:45.615307] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65065 ] 00:08:46.737 [2024-07-25 11:33:45.788329] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.303 [2024-07-25 11:33:46.079959] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.303 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:08:48.678 ====================================== 00:08:48.678 busy:2212948642 (cyc) 00:08:48.678 total_run_count: 287000 00:08:48.678 tsc_hz: 2200000000 (cyc) 00:08:48.678 ====================================== 00:08:48.678 poller_cost: 7710 (cyc), 3504 (nsec) 00:08:48.678 00:08:48.678 real 0m1.921s 00:08:48.678 user 0m1.699s 00:08:48.678 sys 0m0.112s 00:08:48.678 11:33:47 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:48.678 ************************************ 00:08:48.678 END TEST thread_poller_perf 00:08:48.678 ************************************ 00:08:48.678 11:33:47 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:48.678 11:33:47 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:48.678 11:33:47 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:08:48.678 11:33:47 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:48.678 11:33:47 thread -- common/autotest_common.sh@10 -- # set +x 00:08:48.678 ************************************ 00:08:48.678 START TEST thread_poller_perf 00:08:48.678 ************************************ 00:08:48.678 11:33:47 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:48.678 [2024-07-25 11:33:47.595862] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
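The perf summary above is internally consistent: poller_cost in cycles is busy cycles divided by run count, and the nanosecond figure converts by tsc_hz. For the 1 µs-period run:

  echo $(( 2212948642 / 287000 ))             # -> 7710 cyc per poll
  echo $(( 7710 * 1000000000 / 2200000000 ))  # -> 3504 ns at 2.2 GHz

The 0 µs-period run below checks out the same way: 2205068745 / 3687000 = 598 cyc, or 271 ns.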
00:08:48.678 [2024-07-25 11:33:47.596056] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65107 ] 00:08:48.936 [2024-07-25 11:33:47.776007] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.194 [2024-07-25 11:33:48.068554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.194 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:08:50.569 ====================================== 00:08:50.569 busy:2205068745 (cyc) 00:08:50.569 total_run_count: 3687000 00:08:50.569 tsc_hz: 2200000000 (cyc) 00:08:50.569 ====================================== 00:08:50.569 poller_cost: 598 (cyc), 271 (nsec) 00:08:50.569 00:08:50.569 real 0m1.942s 00:08:50.569 user 0m1.701s 00:08:50.569 sys 0m0.129s 00:08:50.569 11:33:49 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:50.569 ************************************ 00:08:50.569 END TEST thread_poller_perf 00:08:50.569 ************************************ 00:08:50.569 11:33:49 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:50.569 11:33:49 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:50.569 00:08:50.569 real 0m4.050s 00:08:50.569 user 0m3.460s 00:08:50.569 sys 0m0.362s 00:08:50.569 11:33:49 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:50.569 ************************************ 00:08:50.569 11:33:49 thread -- common/autotest_common.sh@10 -- # set +x 00:08:50.569 END TEST thread 00:08:50.569 ************************************ 00:08:50.569 11:33:49 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:08:50.569 11:33:49 -- spdk/autotest.sh@189 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:50.569 11:33:49 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:50.569 11:33:49 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:50.569 11:33:49 -- common/autotest_common.sh@10 -- # set +x 00:08:50.569 ************************************ 00:08:50.569 START TEST app_cmdline 00:08:50.569 ************************************ 00:08:50.569 11:33:49 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:50.827 * Looking for test storage... 00:08:50.827 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:50.827 11:33:49 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:50.827 11:33:49 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=65188 00:08:50.827 11:33:49 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:50.827 11:33:49 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 65188 00:08:50.827 11:33:49 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 65188 ']' 00:08:50.827 11:33:49 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:50.827 11:33:49 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:50.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:50.827 11:33:49 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
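app_cmdline starts the target with --rpcs-allowed spdk_get_version,rpc_get_methods, so exactly those two methods are callable and anything else must come back as JSON-RPC -32601. The probes the test runs, condensed (rpc.py path as logged below):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc spdk_get_version        # allowed: returns the version object
  $rpc rpc_get_methods         # allowed: must list exactly the two methods
  $rpc env_dpdk_get_mem_stats  # not whitelisted: -32601 "Method not found"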
00:08:50.827 11:33:49 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:50.827 11:33:49 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:50.827 [2024-07-25 11:33:49.825781] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:50.827 [2024-07-25 11:33:49.826000] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65188 ] 00:08:51.085 [2024-07-25 11:33:50.009144] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.343 [2024-07-25 11:33:50.302374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.277 11:33:51 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:52.277 11:33:51 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:08:52.277 11:33:51 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:08:52.535 { 00:08:52.535 "version": "SPDK v24.09-pre git sha1 704257090", 00:08:52.535 "fields": { 00:08:52.535 "major": 24, 00:08:52.535 "minor": 9, 00:08:52.535 "patch": 0, 00:08:52.535 "suffix": "-pre", 00:08:52.535 "commit": "704257090" 00:08:52.535 } 00:08:52.535 } 00:08:52.535 11:33:51 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:52.535 11:33:51 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:52.535 11:33:51 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:52.535 11:33:51 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:52.535 11:33:51 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:52.535 11:33:51 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.535 11:33:51 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:52.535 11:33:51 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:52.535 11:33:51 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:52.535 11:33:51 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.535 11:33:51 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:52.535 11:33:51 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:52.535 11:33:51 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:52.535 11:33:51 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:08:52.536 11:33:51 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:52.536 11:33:51 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:52.536 11:33:51 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:52.536 11:33:51 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:52.536 11:33:51 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:52.536 11:33:51 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:52.536 11:33:51 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:52.536 11:33:51 app_cmdline -- 
common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:52.536 11:33:51 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:52.536 11:33:51 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:52.793 request: 00:08:52.793 { 00:08:52.793 "method": "env_dpdk_get_mem_stats", 00:08:52.794 "req_id": 1 00:08:52.794 } 00:08:52.794 Got JSON-RPC error response 00:08:52.794 response: 00:08:52.794 { 00:08:52.794 "code": -32601, 00:08:52.794 "message": "Method not found" 00:08:52.794 } 00:08:52.794 11:33:51 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:08:52.794 11:33:51 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:52.794 11:33:51 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:52.794 11:33:51 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:52.794 11:33:51 app_cmdline -- app/cmdline.sh@1 -- # killprocess 65188 00:08:52.794 11:33:51 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 65188 ']' 00:08:52.794 11:33:51 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 65188 00:08:52.794 11:33:51 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:08:52.794 11:33:51 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:52.794 11:33:51 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65188 00:08:53.052 killing process with pid 65188 00:08:53.052 11:33:51 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:53.052 11:33:51 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:53.052 11:33:51 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65188' 00:08:53.052 11:33:51 app_cmdline -- common/autotest_common.sh@969 -- # kill 65188 00:08:53.052 11:33:51 app_cmdline -- common/autotest_common.sh@974 -- # wait 65188 00:08:55.619 ************************************ 00:08:55.619 END TEST app_cmdline 00:08:55.619 ************************************ 00:08:55.619 00:08:55.619 real 0m4.654s 00:08:55.619 user 0m5.069s 00:08:55.619 sys 0m0.724s 00:08:55.619 11:33:54 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:55.619 11:33:54 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:55.619 11:33:54 -- spdk/autotest.sh@190 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:55.619 11:33:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:55.619 11:33:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:55.619 11:33:54 -- common/autotest_common.sh@10 -- # set +x 00:08:55.619 ************************************ 00:08:55.619 START TEST version 00:08:55.619 ************************************ 00:08:55.619 11:33:54 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:55.619 * Looking for test storage... 
00:08:55.619 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:55.620 11:33:54 version -- app/version.sh@17 -- # get_header_version major 00:08:55.620 11:33:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:55.620 11:33:54 version -- app/version.sh@14 -- # tr -d '"' 00:08:55.620 11:33:54 version -- app/version.sh@14 -- # cut -f2 00:08:55.620 11:33:54 version -- app/version.sh@17 -- # major=24 00:08:55.620 11:33:54 version -- app/version.sh@18 -- # get_header_version minor 00:08:55.620 11:33:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:55.620 11:33:54 version -- app/version.sh@14 -- # cut -f2 00:08:55.620 11:33:54 version -- app/version.sh@14 -- # tr -d '"' 00:08:55.620 11:33:54 version -- app/version.sh@18 -- # minor=9 00:08:55.620 11:33:54 version -- app/version.sh@19 -- # get_header_version patch 00:08:55.620 11:33:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:55.620 11:33:54 version -- app/version.sh@14 -- # cut -f2 00:08:55.620 11:33:54 version -- app/version.sh@14 -- # tr -d '"' 00:08:55.620 11:33:54 version -- app/version.sh@19 -- # patch=0 00:08:55.620 11:33:54 version -- app/version.sh@20 -- # get_header_version suffix 00:08:55.620 11:33:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:55.620 11:33:54 version -- app/version.sh@14 -- # tr -d '"' 00:08:55.620 11:33:54 version -- app/version.sh@14 -- # cut -f2 00:08:55.620 11:33:54 version -- app/version.sh@20 -- # suffix=-pre 00:08:55.620 11:33:54 version -- app/version.sh@22 -- # version=24.9 00:08:55.620 11:33:54 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:55.620 11:33:54 version -- app/version.sh@28 -- # version=24.9rc0 00:08:55.620 11:33:54 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:55.620 11:33:54 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:55.620 11:33:54 version -- app/version.sh@30 -- # py_version=24.9rc0 00:08:55.620 11:33:54 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:08:55.620 ************************************ 00:08:55.620 END TEST version 00:08:55.620 ************************************ 00:08:55.620 00:08:55.620 real 0m0.147s 00:08:55.620 user 0m0.077s 00:08:55.620 sys 0m0.101s 00:08:55.620 11:33:54 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:55.620 11:33:54 version -- common/autotest_common.sh@10 -- # set +x 00:08:55.620 11:33:54 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']' 00:08:55.620 11:33:54 -- spdk/autotest.sh@202 -- # uname -s 00:08:55.620 11:33:54 -- spdk/autotest.sh@202 -- # [[ Linux == Linux ]] 00:08:55.620 11:33:54 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:08:55.620 11:33:54 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:08:55.620 11:33:54 -- spdk/autotest.sh@215 -- # '[' 1 -eq 1 ']' 00:08:55.620 11:33:54 -- spdk/autotest.sh@216 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:08:55.620 11:33:54 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 
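The version test just above derives 24.9rc0 by grepping include/spdk/version.h and mapping the -pre suffix to rc0 so it can be compared against the Python package's reported version. A sketch of get_header_version as traced (app/version.sh@13-14); the ${1^^} uppercasing is an assumption, the grep/cut/tr pipeline is verbatim:

  get_header_version() {   # e.g. get_header_version major -> 24
    grep -E "^#define SPDK_VERSION_${1^^}[[:space:]]+" \
      /home/vagrant/spdk_repo/spdk/include/spdk/version.h | cut -f2 | tr -d '"'
  }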
00:08:55.620 11:33:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:55.620 11:33:54 -- common/autotest_common.sh@10 -- # set +x 00:08:55.620 ************************************ 00:08:55.620 START TEST blockdev_nvme 00:08:55.620 ************************************ 00:08:55.620 11:33:54 blockdev_nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:08:55.620 * Looking for test storage... 00:08:55.620 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:55.620 11:33:54 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:55.620 11:33:54 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:08:55.620 11:33:54 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:55.620 11:33:54 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:55.620 11:33:54 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:55.620 11:33:54 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:55.620 11:33:54 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:08:55.620 11:33:54 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:08:55.620 11:33:54 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:08:55.620 11:33:54 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:08:55.620 11:33:54 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:08:55.620 11:33:54 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:08:55.620 11:33:54 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:08:55.620 11:33:54 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:08:55.620 11:33:54 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:08:55.620 11:33:54 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:08:55.620 11:33:54 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:08:55.620 11:33:54 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:08:55.620 11:33:54 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:08:55.620 11:33:54 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:08:55.620 11:33:54 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:08:55.620 11:33:54 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:08:55.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
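start_spdk_tgt plus waitforlisten above implement the usual launch-then-poll handshake. A condensed sketch of that pattern (the retry count and sleep interval are illustrative; the real helper in autotest_common.sh is more thorough):

    # Sketch: start the target, then retry a cheap RPC against the default
    # /var/tmp/spdk.sock socket until the app answers (or give up).
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    for _ in $(seq 1 100); do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.5
    done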
00:08:55.620 11:33:54 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:08:55.620 11:33:54 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:08:55.620 11:33:54 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=65366 00:08:55.620 11:33:54 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:55.620 11:33:54 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 65366 00:08:55.620 11:33:54 blockdev_nvme -- common/autotest_common.sh@831 -- # '[' -z 65366 ']' 00:08:55.620 11:33:54 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:55.620 11:33:54 blockdev_nvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:55.620 11:33:54 blockdev_nvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:55.620 11:33:54 blockdev_nvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:55.620 11:33:54 blockdev_nvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:55.620 11:33:54 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:55.879 [2024-07-25 11:33:54.709533] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:55.879 [2024-07-25 11:33:54.709994] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65366 ] 00:08:55.879 [2024-07-25 11:33:54.889462] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.137 [2024-07-25 11:33:55.136714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.071 11:33:55 blockdev_nvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:57.071 11:33:55 blockdev_nvme -- common/autotest_common.sh@864 -- # return 0 00:08:57.071 11:33:55 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:08:57.071 11:33:55 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:08:57.071 11:33:55 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:08:57.071 11:33:55 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:08:57.071 11:33:55 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:57.071 11:33:56 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:08:57.071 11:33:56 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.071 11:33:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:57.329 11:33:56 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.329 11:33:56 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:08:57.329 11:33:56 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.329 11:33:56 blockdev_nvme -- 
common/autotest_common.sh@10 -- # set +x 00:08:57.329 11:33:56 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.329 11:33:56 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:08:57.329 11:33:56 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:08:57.329 11:33:56 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.329 11:33:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:57.329 11:33:56 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.329 11:33:56 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:08:57.329 11:33:56 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.329 11:33:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:57.587 11:33:56 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.587 11:33:56 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:08:57.587 11:33:56 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.587 11:33:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:57.587 11:33:56 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.587 11:33:56 blockdev_nvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:08:57.587 11:33:56 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:08:57.587 11:33:56 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:08:57.587 11:33:56 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.587 11:33:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:57.587 11:33:56 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.587 11:33:56 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:08:57.587 11:33:56 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:08:57.588 11:33:56 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "3b33c734-1dad-4dd9-b66d-8ed1234fcec8"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "3b33c734-1dad-4dd9-b66d-8ed1234fcec8",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' 
"can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "437e9b70-5ac9-4449-88f5-1efe3ec26a45"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "437e9b70-5ac9-4449-88f5-1efe3ec26a45",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "269f184a-beb3-430e-8d2f-e4d179927787"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "269f184a-beb3-430e-8d2f-e4d179927787",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "4792b40d-a138-455f-9313-87ca83f54c34"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4792b40d-a138-455f-9313-87ca83f54c34",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' 
"write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "3ed387e9-b039-418f-8dd0-d2dc0acbbf4a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "3ed387e9-b039-418f-8dd0-d2dc0acbbf4a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "8faec864-d613-4766-b289-f68f941952ef"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "8faec864-d613-4766-b289-f68f941952ef",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": 
"PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:08:57.588 11:33:56 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:08:57.588 11:33:56 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:08:57.588 11:33:56 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:08:57.588 11:33:56 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 65366 00:08:57.588 11:33:56 blockdev_nvme -- common/autotest_common.sh@950 -- # '[' -z 65366 ']' 00:08:57.588 11:33:56 blockdev_nvme -- common/autotest_common.sh@954 -- # kill -0 65366 00:08:57.588 11:33:56 blockdev_nvme -- common/autotest_common.sh@955 -- # uname 00:08:57.588 11:33:56 blockdev_nvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:57.588 11:33:56 blockdev_nvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65366 00:08:57.588 killing process with pid 65366 00:08:57.588 11:33:56 blockdev_nvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:57.588 11:33:56 blockdev_nvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:57.588 11:33:56 blockdev_nvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65366' 00:08:57.588 11:33:56 blockdev_nvme -- common/autotest_common.sh@969 -- # kill 65366 00:08:57.588 11:33:56 blockdev_nvme -- common/autotest_common.sh@974 -- # wait 65366 00:09:00.118 11:33:58 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:00.118 11:33:58 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:09:00.118 11:33:58 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:09:00.118 11:33:58 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:00.118 11:33:58 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:00.118 ************************************ 00:09:00.118 START TEST bdev_hello_world 00:09:00.118 ************************************ 00:09:00.118 11:33:58 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:09:00.118 [2024-07-25 11:33:58.962938] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:09:00.118 [2024-07-25 11:33:58.963127] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65461 ] 00:09:00.118 [2024-07-25 11:33:59.143989] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.376 [2024-07-25 11:33:59.421791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.309 [2024-07-25 11:34:00.083156] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:09:01.309 [2024-07-25 11:34:00.083229] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:09:01.309 [2024-07-25 11:34:00.083261] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:09:01.309 [2024-07-25 11:34:00.086466] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:09:01.309 [2024-07-25 11:34:00.086845] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:09:01.309 [2024-07-25 11:34:00.086881] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:09:01.309 [2024-07-25 11:34:00.087156] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:09:01.309 00:09:01.309 [2024-07-25 11:34:00.087196] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:09:02.242 00:09:02.242 real 0m2.433s 00:09:02.242 user 0m2.027s 00:09:02.242 sys 0m0.294s 00:09:02.242 11:34:01 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:02.242 11:34:01 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:09:02.242 ************************************ 00:09:02.242 END TEST bdev_hello_world 00:09:02.242 ************************************ 00:09:02.501 11:34:01 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:09:02.501 11:34:01 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:02.501 11:34:01 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:02.501 11:34:01 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:02.501 ************************************ 00:09:02.501 START TEST bdev_bounds 00:09:02.501 ************************************ 00:09:02.501 11:34:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:09:02.501 11:34:01 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=65503 00:09:02.501 11:34:01 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:09:02.501 11:34:01 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:02.501 Process bdevio pid: 65503 00:09:02.501 11:34:01 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 65503' 00:09:02.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
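bdevio is started with -w, so it sets up the bdevs and then blocks until the suites are triggered over RPC; tests.py perform_tests, run just below, is what kicks them off. A minimal sketch of that two-step driver, assuming the default RPC socket:

    # Sketch: run bdevio in wait mode, then trigger the CUnit suites via RPC.
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    bdevio_pid=$!
    # (poll for the RPC socket here, as with waitforlisten above)
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
    kill "$bdevio_pid"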
00:09:02.501 11:34:01 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 65503 00:09:02.501 11:34:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 65503 ']' 00:09:02.501 11:34:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.501 11:34:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:02.501 11:34:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.501 11:34:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:02.501 11:34:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:09:02.501 [2024-07-25 11:34:01.448820] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:09:02.501 [2024-07-25 11:34:01.449296] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65503 ] 00:09:02.760 [2024-07-25 11:34:01.634118] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:03.049 [2024-07-25 11:34:01.916889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:03.049 [2024-07-25 11:34:01.916987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.049 [2024-07-25 11:34:01.917016] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:03.614 11:34:02 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:03.614 11:34:02 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:09:03.614 11:34:02 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:09:03.873 I/O targets: 00:09:03.873 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:09:03.873 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:09:03.873 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:03.873 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:03.873 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:03.873 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:09:03.873 00:09:03.873 00:09:03.873 CUnit - A unit testing framework for C - Version 2.1-3 00:09:03.873 http://cunit.sourceforge.net/ 00:09:03.873 00:09:03.873 00:09:03.873 Suite: bdevio tests on: Nvme3n1 00:09:03.873 Test: blockdev write read block ...passed 00:09:03.873 Test: blockdev write zeroes read block ...passed 00:09:03.873 Test: blockdev write zeroes read no split ...passed 00:09:03.873 Test: blockdev write zeroes read split ...passed 00:09:03.873 Test: blockdev write zeroes read split partial ...passed 00:09:03.873 Test: blockdev reset ...[2024-07-25 11:34:02.757107] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:09:03.873 [2024-07-25 11:34:02.760947] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:03.873 passed 00:09:03.873 Test: blockdev write read 8 blocks ...passed 00:09:03.873 Test: blockdev write read size > 128k ...passed 00:09:03.873 Test: blockdev write read invalid size ...passed 00:09:03.873 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:03.873 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:03.873 Test: blockdev write read max offset ...passed 00:09:03.873 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:03.873 Test: blockdev writev readv 8 blocks ...passed 00:09:03.873 Test: blockdev writev readv 30 x 1block ...passed 00:09:03.873 Test: blockdev writev readv block ...passed 00:09:03.873 Test: blockdev writev readv size > 128k ...passed 00:09:03.873 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:03.873 Test: blockdev comparev and writev ...[2024-07-25 11:34:02.770957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27840a000 len:0x1000 00:09:03.873 [2024-07-25 11:34:02.771019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:03.873 passed 00:09:03.873 Test: blockdev nvme passthru rw ...passed 00:09:03.873 Test: blockdev nvme passthru vendor specific ...passed 00:09:03.873 Test: blockdev nvme admin passthru ...[2024-07-25 11:34:02.771750] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:03.873 [2024-07-25 11:34:02.771800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:03.873 passed 00:09:03.873 Test: blockdev copy ...passed 00:09:03.873 Suite: bdevio tests on: Nvme2n3 00:09:03.873 Test: blockdev write read block ...passed 00:09:03.873 Test: blockdev write zeroes read block ...passed 00:09:03.873 Test: blockdev write zeroes read no split ...passed 00:09:03.873 Test: blockdev write zeroes read split ...passed 00:09:03.873 Test: blockdev write zeroes read split partial ...passed 00:09:03.873 Test: blockdev reset ...[2024-07-25 11:34:02.833190] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:09:03.873 [2024-07-25 11:34:02.837583] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:03.873 passed 00:09:03.873 Test: blockdev write read 8 blocks ...passed 00:09:03.873 Test: blockdev write read size > 128k ...passed 00:09:03.873 Test: blockdev write read invalid size ...passed 00:09:03.873 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:03.873 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:03.873 Test: blockdev write read max offset ...passed 00:09:03.873 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:03.873 Test: blockdev writev readv 8 blocks ...passed 00:09:03.873 Test: blockdev writev readv 30 x 1block ...passed 00:09:03.873 Test: blockdev writev readv block ...passed 00:09:03.873 Test: blockdev writev readv size > 128k ...passed 00:09:03.873 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:03.873 Test: blockdev comparev and writev ...[2024-07-25 11:34:02.845567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x25aa04000 len:0x1000 00:09:03.873 [2024-07-25 11:34:02.845627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:03.873 passed 00:09:03.873 Test: blockdev nvme passthru rw ...passed 00:09:03.873 Test: blockdev nvme passthru vendor specific ...[2024-07-25 11:34:02.846277] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:03.873 passed 00:09:03.873 Test: blockdev nvme admin passthru ...[2024-07-25 11:34:02.846320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:03.873 passed 00:09:03.873 Test: blockdev copy ...passed 00:09:03.873 Suite: bdevio tests on: Nvme2n2 00:09:03.873 Test: blockdev write read block ...passed 00:09:03.873 Test: blockdev write zeroes read block ...passed 00:09:03.873 Test: blockdev write zeroes read no split ...passed 00:09:03.873 Test: blockdev write zeroes read split ...passed 00:09:03.873 Test: blockdev write zeroes read split partial ...passed 00:09:03.873 Test: blockdev reset ...[2024-07-25 11:34:02.910861] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:09:03.873 [2024-07-25 11:34:02.915118] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:03.873 passed 00:09:03.873 Test: blockdev write read 8 blocks ...passed 00:09:03.873 Test: blockdev write read size > 128k ...passed 00:09:03.873 Test: blockdev write read invalid size ...passed 00:09:03.873 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:03.873 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:03.873 Test: blockdev write read max offset ...passed 00:09:03.873 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:03.873 Test: blockdev writev readv 8 blocks ...passed 00:09:03.873 Test: blockdev writev readv 30 x 1block ...passed 00:09:03.873 Test: blockdev writev readv block ...passed 00:09:03.873 Test: blockdev writev readv size > 128k ...passed 00:09:03.873 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:03.873 Test: blockdev comparev and writev ...[2024-07-25 11:34:02.923254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x28a43a000 len:0x1000 00:09:03.873 [2024-07-25 11:34:02.923312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:03.873 passed 00:09:04.132 Test: blockdev nvme passthru rw ...passed 00:09:04.132 Test: blockdev nvme passthru vendor specific ...passed 00:09:04.132 Test: blockdev nvme admin passthru ...[2024-07-25 11:34:02.924110] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:04.132 [2024-07-25 11:34:02.924160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:04.132 passed 00:09:04.132 Test: blockdev copy ...passed 00:09:04.132 Suite: bdevio tests on: Nvme2n1 00:09:04.132 Test: blockdev write read block ...passed 00:09:04.132 Test: blockdev write zeroes read block ...passed 00:09:04.132 Test: blockdev write zeroes read no split ...passed 00:09:04.132 Test: blockdev write zeroes read split ...passed 00:09:04.132 Test: blockdev write zeroes read split partial ...passed 00:09:04.132 Test: blockdev reset ...[2024-07-25 11:34:02.988624] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:09:04.132 [2024-07-25 11:34:02.992719] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:04.132 passed 00:09:04.132 Test: blockdev write read 8 blocks ...passed 00:09:04.132 Test: blockdev write read size > 128k ...passed 00:09:04.132 Test: blockdev write read invalid size ...passed 00:09:04.132 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:04.132 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:04.132 Test: blockdev write read max offset ...passed 00:09:04.132 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:04.132 Test: blockdev writev readv 8 blocks ...passed 00:09:04.132 Test: blockdev writev readv 30 x 1block ...passed 00:09:04.132 Test: blockdev writev readv block ...passed 00:09:04.132 Test: blockdev writev readv size > 128k ...passed 00:09:04.132 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:04.132 Test: blockdev comparev and writev ...[2024-07-25 11:34:03.001272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x28a434000 len:0x1000 00:09:04.132 [2024-07-25 11:34:03.001334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:04.132 passed 00:09:04.132 Test: blockdev nvme passthru rw ...passed 00:09:04.132 Test: blockdev nvme passthru vendor specific ...passed 00:09:04.132 Test: blockdev nvme admin passthru ...[2024-07-25 11:34:03.002146] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:04.132 [2024-07-25 11:34:03.002197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:04.132 passed 00:09:04.132 Test: blockdev copy ...passed 00:09:04.132 Suite: bdevio tests on: Nvme1n1 00:09:04.132 Test: blockdev write read block ...passed 00:09:04.132 Test: blockdev write zeroes read block ...passed 00:09:04.132 Test: blockdev write zeroes read no split ...passed 00:09:04.132 Test: blockdev write zeroes read split ...passed 00:09:04.132 Test: blockdev write zeroes read split partial ...passed 00:09:04.132 Test: blockdev reset ...[2024-07-25 11:34:03.064054] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:09:04.132 [2024-07-25 11:34:03.067745] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:04.132 passed 00:09:04.132 Test: blockdev write read 8 blocks ...passed 00:09:04.132 Test: blockdev write read size > 128k ...passed 00:09:04.132 Test: blockdev write read invalid size ...passed 00:09:04.132 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:04.132 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:04.132 Test: blockdev write read max offset ...passed 00:09:04.132 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:04.132 Test: blockdev writev readv 8 blocks ...passed 00:09:04.132 Test: blockdev writev readv 30 x 1block ...passed 00:09:04.132 Test: blockdev writev readv block ...passed 00:09:04.132 Test: blockdev writev readv size > 128k ...passed 00:09:04.132 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:04.132 Test: blockdev comparev and writev ...[2024-07-25 11:34:03.075827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x28a430000 len:0x1000 00:09:04.132 [2024-07-25 11:34:03.075890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:04.132 passed 00:09:04.132 Test: blockdev nvme passthru rw ...passed 00:09:04.132 Test: blockdev nvme passthru vendor specific ...[2024-07-25 11:34:03.076702] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:04.132 [2024-07-25 11:34:03.076756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:04.132 passed 00:09:04.132 Test: blockdev nvme admin passthru ...passed 00:09:04.132 Test: blockdev copy ...passed 00:09:04.132 Suite: bdevio tests on: Nvme0n1 00:09:04.132 Test: blockdev write read block ...passed 00:09:04.132 Test: blockdev write zeroes read block ...passed 00:09:04.132 Test: blockdev write zeroes read no split ...passed 00:09:04.132 Test: blockdev write zeroes read split ...passed 00:09:04.132 Test: blockdev write zeroes read split partial ...passed 00:09:04.132 Test: blockdev reset ...[2024-07-25 11:34:03.139542] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:09:04.132 [2024-07-25 11:34:03.143352] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:04.132 passed 00:09:04.132 Test: blockdev write read 8 blocks ...passed 00:09:04.132 Test: blockdev write read size > 128k ...passed 00:09:04.132 Test: blockdev write read invalid size ...passed 00:09:04.132 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:04.132 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:04.132 Test: blockdev write read max offset ...passed 00:09:04.132 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:04.132 Test: blockdev writev readv 8 blocks ...passed 00:09:04.132 Test: blockdev writev readv 30 x 1block ...passed 00:09:04.132 Test: blockdev writev readv block ...passed 00:09:04.132 Test: blockdev writev readv size > 128k ...passed 00:09:04.132 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:04.132 Test: blockdev comparev and writev ...passed 00:09:04.132 Test: blockdev nvme passthru rw ...[2024-07-25 11:34:03.151621] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:09:04.132 separate metadata which is not supported yet. 00:09:04.132 passed 00:09:04.132 Test: blockdev nvme passthru vendor specific ...passed 00:09:04.132 Test: blockdev nvme admin passthru ...[2024-07-25 11:34:03.152201] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:09:04.133 [2024-07-25 11:34:03.152264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:09:04.133 passed 00:09:04.133 Test: blockdev copy ...passed 00:09:04.133 00:09:04.133 Run Summary: Type Total Ran Passed Failed Inactive 00:09:04.133 suites 6 6 n/a 0 0 00:09:04.133 tests 138 138 138 0 0 00:09:04.133 asserts 893 893 893 0 n/a 00:09:04.133 00:09:04.133 Elapsed time = 1.230 seconds 00:09:04.133 0 00:09:04.133 11:34:03 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 65503 00:09:04.391 11:34:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 65503 ']' 00:09:04.391 11:34:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 65503 00:09:04.391 11:34:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:09:04.391 11:34:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:04.391 11:34:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65503 00:09:04.391 11:34:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:04.391 11:34:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:04.392 11:34:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65503' 00:09:04.392 killing process with pid 65503 00:09:04.392 11:34:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 65503 00:09:04.392 11:34:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 65503 00:09:05.327 11:34:04 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:09:05.327 00:09:05.327 real 0m2.888s 00:09:05.327 user 0m6.885s 00:09:05.327 sys 0m0.443s 00:09:05.327 11:34:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:05.327 11:34:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:09:05.327 ************************************ 00:09:05.327 END 
TEST bdev_bounds 00:09:05.327 ************************************ 00:09:05.327 11:34:04 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:05.327 11:34:04 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:05.327 11:34:04 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:05.327 11:34:04 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:05.327 ************************************ 00:09:05.327 START TEST bdev_nbd 00:09:05.327 ************************************ 00:09:05.327 11:34:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:05.327 11:34:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:09:05.327 11:34:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:09:05.327 11:34:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:05.327 11:34:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:05.327 11:34:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:05.327 11:34:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:09:05.327 11:34:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:09:05.327 11:34:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:09:05.327 11:34:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:09:05.327 11:34:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:09:05.327 11:34:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:09:05.327 11:34:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:05.327 11:34:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:09:05.327 11:34:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:05.327 11:34:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:09:05.327 11:34:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=65568 00:09:05.327 11:34:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:09:05.327 11:34:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 65568 /var/tmp/spdk-nbd.sock 00:09:05.327 11:34:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:05.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
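nbd_function_test talks to a dedicated bdev_svc instance over /var/tmp/spdk-nbd.sock; each bdev is exported with nbd_start_disk, which prints the kernel device it was bound to. A condensed sketch of one attach, as performed below (when no /dev/nbdX argument is passed, the target picks the first free device):

    # Export one bdev over NBD and confirm the kernel sees the device.
    dev=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
        nbd_start_disk Nvme0n1)
    grep -q -w "${dev#/dev/}" /proc/partitions && echo "$dev attached"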
00:09:05.327 11:34:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 65568 ']' 00:09:05.327 11:34:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:05.327 11:34:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:05.328 11:34:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:05.328 11:34:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:05.328 11:34:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:05.585 [2024-07-25 11:34:04.405305] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:09:05.585 [2024-07-25 11:34:04.405509] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:05.585 [2024-07-25 11:34:04.581347] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.843 [2024-07-25 11:34:04.825625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.776 11:34:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:06.776 11:34:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:09:06.776 11:34:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:06.776 11:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:06.776 11:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:06.776 11:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:09:06.776 11:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:06.776 11:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:06.776 11:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:06.776 11:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:09:06.776 11:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:09:06.776 11:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:09:06.776 11:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:09:06.776 11:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:06.776 11:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:09:06.776 11:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:09:06.776 11:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:09:06.776 11:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:09:06.776 11:34:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:09:06.776 11:34:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # 
local i 00:09:06.776 11:34:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:06.776 11:34:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:06.776 11:34:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:09:06.776 11:34:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:06.776 11:34:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:06.776 11:34:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:06.776 11:34:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:06.776 1+0 records in 00:09:06.776 1+0 records out 00:09:06.776 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000518191 s, 7.9 MB/s 00:09:06.776 11:34:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:06.776 11:34:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:06.776 11:34:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:06.776 11:34:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:06.776 11:34:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:06.776 11:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:06.776 11:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:06.776 11:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:09:07.060 11:34:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:09:07.060 11:34:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:09:07.060 11:34:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:09:07.060 11:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:09:07.060 11:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:07.060 11:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:07.060 11:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:07.060 11:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:09:07.060 11:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:07.060 11:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:07.060 11:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:07.060 11:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:07.060 1+0 records in 00:09:07.060 1+0 records out 00:09:07.060 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000491108 s, 8.3 MB/s 00:09:07.060 11:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:07.060 11:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:07.060 11:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
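The waitfornbd helper, whose output appears above, polls until the device is actually usable: the name must show up in /proc/partitions, and a single direct-I/O 4096-byte read must copy a non-zero number of bytes. Condensed (/tmp/nbdtest is an illustrative scratch path; the run above uses test/bdev/nbdtest):

    # Readiness check distilled from the run above: visible in the partition
    # table, then one O_DIRECT block readable end to end.
    grep -q -w nbd0 /proc/partitions
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    [[ $(stat -c %s /tmp/nbdtest) != 0 ]] && rm -f /tmp/nbdtest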
00:09:07.060 11:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:07.060 11:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:07.060 11:34:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:07.060 11:34:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:07.060 11:34:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:09:07.318 11:34:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:09:07.318 11:34:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:09:07.318 11:34:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:09:07.318 11:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:09:07.318 11:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:07.318 11:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:07.318 11:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:07.318 11:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:09:07.318 11:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:07.318 11:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:07.318 11:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:07.318 11:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:07.318 1+0 records in 00:09:07.318 1+0 records out 00:09:07.318 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000813977 s, 5.0 MB/s 00:09:07.318 11:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:07.318 11:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:07.318 11:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:07.318 11:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:07.318 11:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:07.318 11:34:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:07.318 11:34:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:07.319 11:34:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:09:07.885 11:34:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:09:07.885 11:34:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:09:07.885 11:34:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:09:07.885 11:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:09:07.885 11:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:07.885 11:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:07.885 11:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:07.885 11:34:06 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:09:07.885 11:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:07.885 11:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:07.885 11:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:07.885 11:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:07.885 1+0 records in 00:09:07.885 1+0 records out 00:09:07.885 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000508075 s, 8.1 MB/s 00:09:07.885 11:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:07.885 11:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:07.885 11:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:07.885 11:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:07.885 11:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:07.885 11:34:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:07.885 11:34:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:07.885 11:34:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:09:07.885 11:34:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:09:07.885 11:34:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:09:07.885 11:34:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:09:07.885 11:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:09:07.885 11:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:07.885 11:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:07.885 11:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:07.885 11:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:09:07.885 11:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:07.885 11:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:07.885 11:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:07.885 11:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:07.885 1+0 records in 00:09:07.885 1+0 records out 00:09:07.885 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000663563 s, 6.2 MB/s 00:09:07.885 11:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:07.885 11:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:07.885 11:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:07.885 11:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:07.885 11:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:07.885 11:34:06 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:07.885 11:34:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:07.885 11:34:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:09:08.452 11:34:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:09:08.452 11:34:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:09:08.452 11:34:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:09:08.452 11:34:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:09:08.452 11:34:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:08.452 11:34:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:08.452 11:34:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:08.452 11:34:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:09:08.452 11:34:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:08.452 11:34:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:08.452 11:34:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:08.452 11:34:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:08.452 1+0 records in 00:09:08.452 1+0 records out 00:09:08.452 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00135551 s, 3.0 MB/s 00:09:08.452 11:34:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:08.452 11:34:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:08.452 11:34:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:08.452 11:34:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:08.452 11:34:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:08.452 11:34:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:08.452 11:34:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:08.452 11:34:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:08.710 11:34:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:09:08.710 { 00:09:08.710 "nbd_device": "/dev/nbd0", 00:09:08.710 "bdev_name": "Nvme0n1" 00:09:08.710 }, 00:09:08.710 { 00:09:08.710 "nbd_device": "/dev/nbd1", 00:09:08.710 "bdev_name": "Nvme1n1" 00:09:08.710 }, 00:09:08.710 { 00:09:08.710 "nbd_device": "/dev/nbd2", 00:09:08.710 "bdev_name": "Nvme2n1" 00:09:08.710 }, 00:09:08.710 { 00:09:08.710 "nbd_device": "/dev/nbd3", 00:09:08.710 "bdev_name": "Nvme2n2" 00:09:08.710 }, 00:09:08.710 { 00:09:08.710 "nbd_device": "/dev/nbd4", 00:09:08.710 "bdev_name": "Nvme2n3" 00:09:08.710 }, 00:09:08.710 { 00:09:08.710 "nbd_device": "/dev/nbd5", 00:09:08.710 "bdev_name": "Nvme3n1" 00:09:08.710 } 00:09:08.710 ]' 00:09:08.710 11:34:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:09:08.710 11:34:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 
00:09:08.710 { 00:09:08.710 "nbd_device": "/dev/nbd0", 00:09:08.710 "bdev_name": "Nvme0n1" 00:09:08.710 }, 00:09:08.710 { 00:09:08.710 "nbd_device": "/dev/nbd1", 00:09:08.710 "bdev_name": "Nvme1n1" 00:09:08.710 }, 00:09:08.710 { 00:09:08.710 "nbd_device": "/dev/nbd2", 00:09:08.710 "bdev_name": "Nvme2n1" 00:09:08.710 }, 00:09:08.710 { 00:09:08.710 "nbd_device": "/dev/nbd3", 00:09:08.710 "bdev_name": "Nvme2n2" 00:09:08.710 }, 00:09:08.710 { 00:09:08.710 "nbd_device": "/dev/nbd4", 00:09:08.710 "bdev_name": "Nvme2n3" 00:09:08.710 }, 00:09:08.710 { 00:09:08.710 "nbd_device": "/dev/nbd5", 00:09:08.710 "bdev_name": "Nvme3n1" 00:09:08.710 } 00:09:08.710 ]' 00:09:08.710 11:34:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:09:08.710 11:34:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:09:08.710 11:34:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:08.710 11:34:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:09:08.710 11:34:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:08.710 11:34:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:08.710 11:34:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:08.710 11:34:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:08.969 11:34:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:08.969 11:34:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:08.969 11:34:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:08.969 11:34:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:08.969 11:34:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:08.969 11:34:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:08.969 11:34:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:08.969 11:34:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:08.969 11:34:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:08.969 11:34:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:09.228 11:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:09.228 11:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:09.228 11:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:09.228 11:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:09.228 11:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:09.228 11:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:09.228 11:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:09.228 11:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:09.228 11:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:09.228 11:34:08 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:09:09.486 11:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:09:09.486 11:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:09:09.486 11:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:09:09.486 11:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:09.486 11:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:09.486 11:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:09:09.486 11:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:09.486 11:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:09.486 11:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:09.486 11:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:09:09.743 11:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:09:09.743 11:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:09:09.743 11:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:09:09.743 11:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:09.743 11:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:09.743 11:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:09:09.743 11:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:09.743 11:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:09.743 11:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:09.743 11:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:09:10.002 11:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:09:10.002 11:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:09:10.002 11:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:09:10.002 11:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:10.002 11:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:10.002 11:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:09:10.002 11:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:10.002 11:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:10.002 11:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:10.002 11:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:09:10.260 11:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:09:10.260 11:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:09:10.260 11:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:09:10.260 11:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:10.260 11:34:09 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:10.260 11:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:09:10.260 11:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:10.260 11:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:10.260 11:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:10.260 11:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:10.260 11:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:10.518 11:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:10.518 11:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:10.518 11:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:10.518 11:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:10.518 11:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:10.518 11:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:10.518 11:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:10.518 11:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:10.518 11:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:10.776 11:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:09:10.776 11:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:09:10.776 11:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:09:10.776 11:34:09 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:10.776 11:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:10.776 11:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:10.776 11:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:10.776 11:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:10.776 11:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:10.776 11:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:10.776 11:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:10.776 11:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:10.776 11:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:10.776 11:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:10.776 11:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:10.776 11:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:09:10.776 11:34:09 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:10.776 11:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:10.776 11:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:09:11.034 /dev/nbd0 00:09:11.034 11:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:11.034 11:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:11.034 11:34:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:09:11.034 11:34:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:11.034 11:34:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:11.034 11:34:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:11.034 11:34:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:09:11.034 11:34:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:11.034 11:34:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:11.034 11:34:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:11.034 11:34:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:11.034 1+0 records in 00:09:11.034 1+0 records out 00:09:11.034 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000588548 s, 7.0 MB/s 00:09:11.034 11:34:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:11.034 11:34:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:11.034 11:34:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:11.034 11:34:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:11.034 11:34:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:11.034 11:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:11.034 11:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:11.034 11:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:09:11.293 /dev/nbd1 00:09:11.293 11:34:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:11.293 11:34:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:11.293 11:34:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:09:11.293 11:34:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:11.293 11:34:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:11.293 11:34:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:11.293 11:34:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:09:11.293 11:34:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:11.293 11:34:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:11.293 11:34:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:11.293 11:34:10 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:11.293 1+0 records in 00:09:11.293 1+0 records out 00:09:11.293 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000759258 s, 5.4 MB/s 00:09:11.293 11:34:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:11.293 11:34:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:11.293 11:34:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:11.293 11:34:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:11.293 11:34:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:11.293 11:34:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:11.293 11:34:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:11.293 11:34:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:09:11.551 /dev/nbd10 00:09:11.551 11:34:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:09:11.551 11:34:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:09:11.551 11:34:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:09:11.551 11:34:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:11.551 11:34:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:11.551 11:34:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:11.551 11:34:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:09:11.551 11:34:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:11.551 11:34:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:11.551 11:34:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:11.551 11:34:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:11.551 1+0 records in 00:09:11.551 1+0 records out 00:09:11.551 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000449732 s, 9.1 MB/s 00:09:11.551 11:34:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:11.551 11:34:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:11.551 11:34:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:11.551 11:34:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:11.551 11:34:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:11.551 11:34:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:11.551 11:34:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:11.551 11:34:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:09:11.810 /dev/nbd11 00:09:11.810 11:34:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 
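The nbd_common.sh@27-@28 counter entries above step through the six bdev/device pairs, and the common/autotest_common.sh@868-@889 entries that follow each attach are SPDK's waitfornbd readiness probe: poll /proc/partitions until the new node appears, then prove the device actually serves reads by pulling a single 4 KiB block with O_DIRECT. Reconstructed from the line tags in this trace, the helper looks roughly like the sketch below; the authoritative copy lives in test/common/autotest_common.sh and may differ in detail:

    # Sketch of waitfornbd as reconstructed from this trace (not verbatim).
    waitfornbd() {
        local nbd_name=$1
        local i
        # @871-@873: wait for the kernel to publish the device node.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1  # assumed back-off; the trace only shows the success path
        done
        # @884-@889: read one direct-I/O 4 KiB block and check that it landed.
        for ((i = 1; i <= 20; i++)); do
            dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct
            local size
            size=$(stat -c %s /tmp/nbdtest)
            rm -f /tmp/nbdtest
            [ "$size" != 0 ] && return 0
        done
        return 1  # assumed failure path; every probe in this log succeeds
    }

The trace uses test/bdev/nbdtest as its scratch file; /tmp/nbdtest above is a stand-in.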
00:09:11.810 11:34:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:09:11.810 11:34:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:09:11.810 11:34:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:11.810 11:34:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:11.810 11:34:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:11.810 11:34:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:09:11.810 11:34:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:11.810 11:34:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:11.810 11:34:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:11.810 11:34:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:11.810 1+0 records in 00:09:11.810 1+0 records out 00:09:11.810 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000677873 s, 6.0 MB/s 00:09:11.810 11:34:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:11.810 11:34:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:11.810 11:34:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:11.810 11:34:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:11.810 11:34:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:11.810 11:34:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:11.810 11:34:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:11.810 11:34:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:09:12.070 /dev/nbd12 00:09:12.070 11:34:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:09:12.070 11:34:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:09:12.070 11:34:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:09:12.070 11:34:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:12.070 11:34:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:12.070 11:34:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:12.070 11:34:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:09:12.070 11:34:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:12.070 11:34:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:12.070 11:34:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:12.070 11:34:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:12.070 1+0 records in 00:09:12.070 1+0 records out 00:09:12.070 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000604237 s, 6.8 MB/s 00:09:12.070 11:34:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
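Each iteration of that loop is one RPC against the standalone spdk-nbd app; the -s flag points scripts/rpc.py at the test's private socket rather than the default /var/tmp/spdk.sock, so the NBD server and the main SPDK target never share a control channel. The attach/detach pair can be reproduced with exactly the two calls that appear verbatim in this trace:

    # Export bdev Nvme2n3 through the kernel NBD driver as /dev/nbd12 ...
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12
    # ... use it like any block device, then tear the mapping down again.
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12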
00:09:12.070 11:34:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:12.070 11:34:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:12.070 11:34:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:12.070 11:34:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:12.070 11:34:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:12.070 11:34:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:12.070 11:34:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:09:12.383 /dev/nbd13 00:09:12.383 11:34:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:09:12.383 11:34:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:09:12.383 11:34:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:09:12.383 11:34:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:12.383 11:34:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:12.383 11:34:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:12.383 11:34:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:09:12.383 11:34:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:12.383 11:34:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:12.383 11:34:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:12.383 11:34:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:12.383 1+0 records in 00:09:12.383 1+0 records out 00:09:12.383 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000524649 s, 7.8 MB/s 00:09:12.383 11:34:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:12.383 11:34:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:12.383 11:34:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:12.383 11:34:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:12.383 11:34:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:12.383 11:34:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:12.383 11:34:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:12.383 11:34:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:12.383 11:34:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:12.383 11:34:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:12.642 11:34:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:12.642 { 00:09:12.642 "nbd_device": "/dev/nbd0", 00:09:12.642 "bdev_name": "Nvme0n1" 00:09:12.642 }, 00:09:12.642 { 00:09:12.642 "nbd_device": "/dev/nbd1", 00:09:12.642 "bdev_name": "Nvme1n1" 00:09:12.642 }, 00:09:12.642 { 00:09:12.642 "nbd_device": 
"/dev/nbd10", 00:09:12.642 "bdev_name": "Nvme2n1" 00:09:12.642 }, 00:09:12.642 { 00:09:12.642 "nbd_device": "/dev/nbd11", 00:09:12.642 "bdev_name": "Nvme2n2" 00:09:12.642 }, 00:09:12.642 { 00:09:12.642 "nbd_device": "/dev/nbd12", 00:09:12.642 "bdev_name": "Nvme2n3" 00:09:12.642 }, 00:09:12.642 { 00:09:12.642 "nbd_device": "/dev/nbd13", 00:09:12.642 "bdev_name": "Nvme3n1" 00:09:12.642 } 00:09:12.642 ]' 00:09:12.642 11:34:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:12.642 { 00:09:12.642 "nbd_device": "/dev/nbd0", 00:09:12.642 "bdev_name": "Nvme0n1" 00:09:12.642 }, 00:09:12.642 { 00:09:12.642 "nbd_device": "/dev/nbd1", 00:09:12.642 "bdev_name": "Nvme1n1" 00:09:12.642 }, 00:09:12.642 { 00:09:12.642 "nbd_device": "/dev/nbd10", 00:09:12.642 "bdev_name": "Nvme2n1" 00:09:12.642 }, 00:09:12.642 { 00:09:12.642 "nbd_device": "/dev/nbd11", 00:09:12.642 "bdev_name": "Nvme2n2" 00:09:12.642 }, 00:09:12.642 { 00:09:12.642 "nbd_device": "/dev/nbd12", 00:09:12.642 "bdev_name": "Nvme2n3" 00:09:12.642 }, 00:09:12.642 { 00:09:12.642 "nbd_device": "/dev/nbd13", 00:09:12.642 "bdev_name": "Nvme3n1" 00:09:12.642 } 00:09:12.642 ]' 00:09:12.642 11:34:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:12.642 11:34:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:12.642 /dev/nbd1 00:09:12.642 /dev/nbd10 00:09:12.642 /dev/nbd11 00:09:12.642 /dev/nbd12 00:09:12.642 /dev/nbd13' 00:09:12.642 11:34:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:12.642 /dev/nbd1 00:09:12.642 /dev/nbd10 00:09:12.642 /dev/nbd11 00:09:12.642 /dev/nbd12 00:09:12.642 /dev/nbd13' 00:09:12.642 11:34:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:12.642 11:34:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:09:12.642 11:34:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:09:12.642 11:34:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:09:12.642 11:34:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:09:12.642 11:34:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:09:12.642 11:34:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:12.642 11:34:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:12.642 11:34:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:12.642 11:34:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:12.642 11:34:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:12.642 11:34:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:09:12.642 256+0 records in 00:09:12.642 256+0 records out 00:09:12.642 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0125838 s, 83.3 MB/s 00:09:12.642 11:34:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:12.642 11:34:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:12.900 256+0 records in 00:09:12.900 256+0 records out 00:09:12.900 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.153273 s, 6.8 MB/s 00:09:12.900 11:34:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:12.900 11:34:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:12.900 256+0 records in 00:09:12.900 256+0 records out 00:09:12.900 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.127751 s, 8.2 MB/s 00:09:12.900 11:34:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:12.900 11:34:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:09:13.158 256+0 records in 00:09:13.158 256+0 records out 00:09:13.158 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.157119 s, 6.7 MB/s 00:09:13.158 11:34:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:13.158 11:34:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:09:13.158 256+0 records in 00:09:13.158 256+0 records out 00:09:13.158 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.154183 s, 6.8 MB/s 00:09:13.158 11:34:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:13.158 11:34:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:09:13.416 256+0 records in 00:09:13.416 256+0 records out 00:09:13.416 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.162829 s, 6.4 MB/s 00:09:13.416 11:34:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:13.416 11:34:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:09:13.675 256+0 records in 00:09:13.675 256+0 records out 00:09:13.675 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.141982 s, 7.4 MB/s 00:09:13.675 11:34:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:09:13.675 11:34:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:13.675 11:34:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:13.675 11:34:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:13.675 11:34:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:13.675 11:34:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:13.675 11:34:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:13.675 11:34:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:13.675 11:34:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:09:13.675 11:34:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:13.675 11:34:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:09:13.675 11:34:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 
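The nbd_common.sh@70-@78 entries above are the write half of nbd_dd_data_verify: one shared 1 MiB random pattern is generated and dd-written to every attached device with O_DIRECT. The @82-@83 cmp entries around this point are the verify half, re-reading each device against the same pattern. Condensed into a single sketch (variable names assumed; the trace actually runs it as two calls, write then verify):

    pattern=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
    dd if=/dev/urandom of=$pattern bs=4096 count=256           # 1 MiB pattern
    for dev in "${nbd_list[@]}"; do
        dd if=$pattern of=$dev bs=4096 count=256 oflag=direct  # write phase
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M $pattern $dev                             # verify phase
    done
    rm $pattern

cmp -n 1M limits the comparison to the first MiB and -b prints any differing bytes, so silent corruption anywhere in the bdev-to-NBD path fails this stage loudly.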
00:09:13.675 11:34:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:09:13.675 11:34:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:13.675 11:34:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:09:13.675 11:34:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:13.675 11:34:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:09:13.675 11:34:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:13.675 11:34:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:09:13.675 11:34:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:13.675 11:34:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:13.675 11:34:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:13.675 11:34:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:13.675 11:34:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:13.675 11:34:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:13.675 11:34:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:13.675 11:34:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:13.933 11:34:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:13.933 11:34:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:13.933 11:34:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:13.933 11:34:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:13.933 11:34:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:13.933 11:34:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:13.933 11:34:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:13.933 11:34:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:13.933 11:34:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:13.933 11:34:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:14.191 11:34:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:14.191 11:34:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:14.191 11:34:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:14.191 11:34:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:14.191 11:34:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:14.191 11:34:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:14.191 11:34:13 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@41 -- # break 00:09:14.191 11:34:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:14.191 11:34:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:14.191 11:34:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:09:14.449 11:34:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:09:14.449 11:34:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:09:14.449 11:34:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:09:14.449 11:34:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:14.449 11:34:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:14.449 11:34:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:09:14.449 11:34:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:14.449 11:34:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:14.449 11:34:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:14.449 11:34:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:09:14.707 11:34:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:09:14.707 11:34:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:09:14.707 11:34:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:09:14.707 11:34:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:14.707 11:34:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:14.707 11:34:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:09:14.707 11:34:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:14.707 11:34:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:14.707 11:34:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:14.707 11:34:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:09:14.965 11:34:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:09:14.965 11:34:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:09:14.965 11:34:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:09:14.965 11:34:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:14.965 11:34:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:14.965 11:34:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:09:14.965 11:34:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:14.965 11:34:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:14.965 11:34:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:14.965 11:34:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:09:15.531 11:34:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:09:15.531 11:34:14 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:09:15.531 11:34:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:09:15.531 11:34:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:15.531 11:34:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:15.531 11:34:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:09:15.531 11:34:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:15.531 11:34:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:15.531 11:34:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:15.531 11:34:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:15.531 11:34:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:15.531 11:34:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:15.531 11:34:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:15.531 11:34:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:15.789 11:34:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:15.790 11:34:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:15.790 11:34:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:15.790 11:34:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:15.790 11:34:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:15.790 11:34:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:15.790 11:34:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:09:15.790 11:34:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:15.790 11:34:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:09:15.790 11:34:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:15.790 11:34:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:15.790 11:34:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:15.790 11:34:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:09:15.790 11:34:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:09:15.790 11:34:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:09:16.048 malloc_lvol_verify 00:09:16.048 11:34:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:09:16.305 8e3f4207-0d60-4fc2-b276-6fdf1daae8eb 00:09:16.306 11:34:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:09:16.562 904de049-f4ca-434e-aa8e-370b684843ba 00:09:16.563 11:34:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol 
/dev/nbd0 00:09:16.820 /dev/nbd0 00:09:16.820 11:34:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:09:16.820 mke2fs 1.46.5 (30-Dec-2021) 00:09:16.820 Discarding device blocks: 0/4096 done 00:09:16.820 Creating filesystem with 4096 1k blocks and 1024 inodes 00:09:16.820 00:09:16.820 Allocating group tables: 0/1 done 00:09:16.820 Writing inode tables: 0/1 done 00:09:16.820 Creating journal (1024 blocks): done 00:09:16.820 Writing superblocks and filesystem accounting information: 0/1 done 00:09:16.820 00:09:16.820 11:34:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:09:16.820 11:34:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:16.820 11:34:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:16.820 11:34:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:16.820 11:34:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:16.820 11:34:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:16.820 11:34:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:16.820 11:34:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:17.078 11:34:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:17.078 11:34:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:17.078 11:34:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:17.078 11:34:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:17.078 11:34:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:17.078 11:34:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:17.078 11:34:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:17.078 11:34:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:17.078 11:34:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:09:17.078 11:34:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:09:17.078 11:34:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 65568 00:09:17.078 11:34:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 65568 ']' 00:09:17.078 11:34:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 65568 00:09:17.078 11:34:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:09:17.078 11:34:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:17.078 11:34:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65568 00:09:17.078 11:34:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:17.078 11:34:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:17.078 killing process with pid 65568 00:09:17.078 11:34:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65568' 00:09:17.078 11:34:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 65568 00:09:17.078 11:34:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 65568 00:09:18.453 ************************************ 00:09:18.453 END TEST 
bdev_nbd 00:09:18.453 ************************************ 00:09:18.453 11:34:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:09:18.453 00:09:18.453 real 0m13.038s 00:09:18.453 user 0m18.448s 00:09:18.453 sys 0m4.114s 00:09:18.453 11:34:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:18.453 11:34:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:18.453 11:34:17 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:09:18.453 11:34:17 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:09:18.453 skipping fio tests on NVMe due to multi-ns failures. 00:09:18.453 11:34:17 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:09:18.453 11:34:17 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:18.453 11:34:17 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:18.453 11:34:17 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:09:18.453 11:34:17 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:18.453 11:34:17 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:18.453 ************************************ 00:09:18.453 START TEST bdev_verify 00:09:18.453 ************************************ 00:09:18.453 11:34:17 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:18.453 [2024-07-25 11:34:17.489770] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:09:18.453 [2024-07-25 11:34:17.489975] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65975 ] 00:09:18.712 [2024-07-25 11:34:17.669797] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:18.970 [2024-07-25 11:34:17.912249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.970 [2024-07-25 11:34:17.912265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:19.905 Running I/O for 5 seconds... 
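At this point the suite leaves the kernel NBD path behind: bdev_verify hands the same six bdevs straight to the bdevperf example app whose startup banner appears above. The invocation decodes as follows (flag meanings per standard bdevperf usage):

    # Decoded from the run_test line in this log; paths are repo-relative.
    build/examples/bdevperf --json test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3
    #   --json     bdev configuration to load at startup
    #   -q 128     keep 128 I/Os outstanding per job
    #   -o 4096    4 KiB I/O size
    #   -w verify  write-and-read-back workload that checks data integrity
    #   -t 5       run for 5 seconds
    #   -m 0x3     core mask: cores 0 and 1, matching the two reactors above
    #   -C         forwarded by the harness as-is (see bdevperf usage)

The 0x3 mask is why every bdev appears twice in the results that follow: one verify job per core, each covering half of the LBA range.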
00:09:25.170 
00:09:25.170 Latency(us)
00:09:25.170 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:25.170 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:25.170 Verification LBA range: start 0x0 length 0xbd0bd
00:09:25.170 Nvme0n1 : 5.07 1515.19 5.92 0.00 0.00 84270.90 17515.99 75783.45
00:09:25.170 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:25.170 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:09:25.170 Nvme0n1 : 5.08 1513.10 5.91 0.00 0.00 84393.50 14477.50 98661.47
00:09:25.170 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:25.170 Verification LBA range: start 0x0 length 0xa0000
00:09:25.170 Nvme1n1 : 5.07 1514.54 5.92 0.00 0.00 84134.94 17635.14 70540.57
00:09:25.170 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:25.170 Verification LBA range: start 0xa0000 length 0xa0000
00:09:25.170 Nvme1n1 : 5.08 1512.15 5.91 0.00 0.00 84264.91 15728.64 93418.59
00:09:25.170 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:25.170 Verification LBA range: start 0x0 length 0x80000
00:09:25.170 Nvme2n1 : 5.07 1513.88 5.91 0.00 0.00 84012.40 17277.67 67204.19
00:09:25.170 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:25.170 Verification LBA range: start 0x80000 length 0x80000
00:09:25.170 Nvme2n1 : 5.08 1511.15 5.90 0.00 0.00 84079.95 17158.52 94371.84
00:09:25.170 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:25.170 Verification LBA range: start 0x0 length 0x80000
00:09:25.170 Nvme2n2 : 5.08 1513.23 5.91 0.00 0.00 83878.03 17277.67 69587.32
00:09:25.170 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:25.170 Verification LBA range: start 0x80000 length 0x80000
00:09:25.170 Nvme2n2 : 5.09 1510.14 5.90 0.00 0.00 83940.45 18350.08 97708.22
00:09:25.170 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:25.170 Verification LBA range: start 0x0 length 0x80000
00:09:25.170 Nvme2n3 : 5.08 1512.26 5.91 0.00 0.00 83737.89 18230.92 72923.69
00:09:25.170 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:25.170 Verification LBA range: start 0x80000 length 0x80000
00:09:25.170 Nvme2n3 : 5.09 1509.56 5.90 0.00 0.00 83788.64 17158.52 99614.72
00:09:25.170 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:25.170 Verification LBA range: start 0x0 length 0x20000
00:09:25.170 Nvme3n1 : 5.08 1511.27 5.90 0.00 0.00 83602.73 10009.13 76260.07
00:09:25.170 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:25.170 Verification LBA range: start 0x20000 length 0x20000
00:09:25.170 Nvme3n1 : 5.09 1508.98 5.89 0.00 0.00 83653.26 12332.68 99614.72
00:09:25.170 ===================================================================================================================
00:09:25.170 Total : 18145.45 70.88 0.00 0.00 83979.80 10009.13 99614.72
00:09:26.544 
00:09:26.544 real 0m7.974s
00:09:26.544 user 0m14.214s
00:09:26.544 sys 0m0.351s
00:09:26.544 11:34:25 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:26.544 11:34:25 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:09:26.544 ************************************
00:09:26.544 END TEST bdev_verify
00:09:26.544 ************************************
00:09:26.544 11:34:25 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:09:26.544 11:34:25 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']'
00:09:26.544 11:34:25 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:26.544 11:34:25 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:09:26.544 ************************************
00:09:26.544 START TEST bdev_verify_big_io
00:09:26.544 ************************************
00:09:26.544 11:34:25 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:09:26.544 [2024-07-25 11:34:25.520947] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:09:26.802 [2024-07-25 11:34:25.521139] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66083 ]
00:09:26.802 [2024-07-25 11:34:25.699618] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:09:27.061 [2024-07-25 11:34:25.950514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:09:27.061 [2024-07-25 11:34:25.950531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:09:27.995 Running I/O for 5 seconds...
00:09:34.546 
00:09:34.546 Latency(us)
00:09:34.546 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:34.546 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:34.546 Verification LBA range: start 0x0 length 0xbd0b
00:09:34.546 Nvme0n1 : 5.64 124.89 7.81 0.00 0.00 990299.82 20733.21 1044763.00
00:09:34.546 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:34.546 Verification LBA range: start 0xbd0b length 0xbd0b
00:09:34.546 Nvme0n1 : 5.67 124.26 7.77 0.00 0.00 987377.27 26095.24 1037136.99
00:09:34.546 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:34.546 Verification LBA range: start 0x0 length 0xa000
00:09:34.546 Nvme1n1 : 5.83 128.56 8.04 0.00 0.00 932638.61 90082.21 884616.84
00:09:34.546 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:34.546 Verification LBA range: start 0xa000 length 0xa000
00:09:34.546 Nvme1n1 : 5.83 128.50 8.03 0.00 0.00 935829.42 58148.31 896055.85
00:09:34.546 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:34.546 Verification LBA range: start 0x0 length 0x8000
00:09:34.546 Nvme2n1 : 5.83 128.00 8.00 0.00 0.00 907572.17 90558.84 850299.81
00:09:34.546 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:34.546 Verification LBA range: start 0x8000 length 0x8000
00:09:34.546 Nvme2n1 : 5.83 128.10 8.01 0.00 0.00 906082.06 58624.93 968502.92
00:09:34.546 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:34.546 Verification LBA range: start 0x0 length 0x8000
00:09:34.546 Nvme2n2 : 5.83 131.69 8.23 0.00 0.00 861634.25 96278.34 873177.83
00:09:34.546 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:34.546 Verification LBA range: start 0x8000 length 0x8000
00:09:34.546 Nvme2n2 : 5.83 131.64 8.23 0.00 0.00 860250.30 96754.97 991380.95
00:09:34.546 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:34.546 Verification LBA range: start 0x0 length 0x8000
00:09:34.546 Nvme2n3 : 5.90 137.49 8.59 0.00 0.00 803572.37 25856.93 1433689.37
00:09:34.546 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:34.546 Verification LBA range: start 0x8000 length 0x8000
00:09:34.546 Nvme2n3 : 5.91 140.87 8.80 0.00 0.00 784086.32 19184.17 1006632.96
00:09:34.546 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:34.546 Verification LBA range: start 0x0 length 0x2000
00:09:34.546 Nvme3n1 : 5.91 141.73 8.86 0.00 0.00 755739.74 2398.02 1952257.86
00:09:34.546 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:34.546 Verification LBA range: start 0x2000 length 0x2000
00:09:34.546 Nvme3n1 : 5.92 151.36 9.46 0.00 0.00 710577.00 2457.60 1021884.97
00:09:34.546 ===================================================================================================================
00:09:34.546 Total : 1597.09 99.82 0.00 0.00 863628.72 2398.02 1952257.86
00:09:35.917 
00:09:35.917 real 0m9.260s
00:09:35.917 user 0m16.924s
00:09:35.917 sys 0m0.390s
00:09:35.917 11:34:34 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:35.917 ************************************
00:09:35.917 END TEST bdev_verify_big_io
00:09:35.917 ************************************
00:09:35.917 11:34:34 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:09:35.917 11:34:34 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:09:35.917 11:34:34 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:09:35.917 11:34:34 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:35.917 11:34:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:09:35.917 ************************************
00:09:35.917 START TEST bdev_write_zeroes
00:09:35.917 ************************************
00:09:35.917 11:34:34 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:09:35.917 [2024-07-25 11:34:34.826981] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:09:36.175 [2024-07-25 11:34:34.827173] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66199 ]
00:09:36.175 [2024-07-25 11:34:34.996352] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:36.433 [2024-07-25 11:34:35.235255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:09:36.998 Running I/O for 1 seconds...
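Every bdevperf invocation in this stretch of the log follows the same shape, so the flags are worth one gloss. A minimal sketch of the big-I/O verify pass above, assuming the CI's checkout path; the -q/-o/-w/-t/-m glosses (queue depth, I/O size in bytes, workload, run time in seconds, reactor core mask) are standard bdevperf/SPDK options, while -C is reproduced from the log without interpretation:

    #!/usr/bin/env bash
    # Sketch of the bdev_verify_big_io run above, not a new test.
    SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}  # assumed checkout path

    args=(
      --json "$SPDK_DIR/test/bdev/bdev.json"  # bdev config generated by this run
      -q 128       # per-job queue depth
      -o 65536     # 64 KiB I/Os, the "big" part of bdev_verify_big_io
      -w verify    # write-then-read-back verification workload
      -t 5         # run time in seconds
      -C -m 0x3    # mask 0x3 = reactors on cores 0 and 1; -C copied verbatim
    )
    "$SPDK_DIR/build/examples/bdevperf" "${args[@]}"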
00:09:37.932 
00:09:37.932 Latency(us)
00:09:37.932 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:37.932 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:37.932 Nvme0n1 : 1.02 7235.90 28.27 0.00 0.00 17643.54 10247.45 25856.93
00:09:37.932 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:37.932 Nvme1n1 : 1.02 7224.58 28.22 0.00 0.00 17641.66 10724.07 25618.62
00:09:37.932 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:37.932 Nvme2n1 : 1.02 7213.35 28.18 0.00 0.00 17561.46 10366.60 21805.61
00:09:37.932 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:37.932 Nvme2n2 : 1.02 7202.26 28.13 0.00 0.00 17552.68 10724.07 22163.08
00:09:37.932 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:37.932 Nvme2n3 : 1.02 7191.14 28.09 0.00 0.00 17539.91 10604.92 22043.93
00:09:37.932 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:37.932 Nvme3n1 : 1.03 7180.12 28.05 0.00 0.00 17528.16 9770.82 22520.55
00:09:37.932 ===================================================================================================================
00:09:37.932 Total : 43247.35 168.93 0.00 0.00 17577.90 9770.82 25856.93
00:09:39.317 
00:09:39.317 real 0m3.429s
00:09:39.317 user 0m3.005s
00:09:39.317 sys 0m0.299s
00:09:39.317 11:34:38 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:39.317 11:34:38 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:09:39.317 ************************************
00:09:39.317 END TEST bdev_write_zeroes
00:09:39.317 ************************************
00:09:39.317 11:34:38 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:09:39.317 11:34:38 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:09:39.317 11:34:38 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:39.317 11:34:38 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:09:39.317 ************************************
00:09:39.317 START TEST bdev_json_nonenclosed
00:09:39.317 ************************************
00:09:39.317 11:34:38 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:09:39.317 [2024-07-25 11:34:38.329761] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:09:39.576 [2024-07-25 11:34:38.329991] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66252 ]
00:09:39.576 [2024-07-25 11:34:38.512012] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:39.835 [2024-07-25 11:34:38.762256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:09:39.835 [2024-07-25 11:34:38.762401] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
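That *ERROR* is the entire point of bdev_json_nonenclosed: nonenclosed.json (its contents are not shown in this log) carries a bdev configuration that is not wrapped in a top-level object, and the test passes when bdevperf refuses to start. A hedged re-creation of the same failure mode with a hypothetical file, reusing the SPDK_DIR assumption from the earlier sketch:

    # Hypothetical stand-in for nonenclosed.json; the repo file's exact
    # contents are not visible in this log.
    printf '%s\n' '"subsystems": []' > /tmp/nonenclosed.json

    # The pass criterion is a non-zero exit from bdevperf on the bad config.
    if ! "$SPDK_DIR/build/examples/bdevperf" --json /tmp/nonenclosed.json \
        -q 128 -o 4096 -w write_zeroes -t 1; then
      echo "config rejected, as bdev_json_nonenclosed expects"
    fi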
00:09:39.835 [2024-07-25 11:34:38.762436] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:39.835 [2024-07-25 11:34:38.762455] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:40.401 00:09:40.401 real 0m1.001s 00:09:40.401 user 0m0.709s 00:09:40.401 sys 0m0.183s 00:09:40.401 11:34:39 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:40.401 11:34:39 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:09:40.401 ************************************ 00:09:40.401 END TEST bdev_json_nonenclosed 00:09:40.401 ************************************ 00:09:40.401 11:34:39 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:40.401 11:34:39 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:09:40.401 11:34:39 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:40.401 11:34:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:40.401 ************************************ 00:09:40.401 START TEST bdev_json_nonarray 00:09:40.401 ************************************ 00:09:40.401 11:34:39 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:40.401 [2024-07-25 11:34:39.384235] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:09:40.401 [2024-07-25 11:34:39.384618] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66283 ] 00:09:40.659 [2024-07-25 11:34:39.565072] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.917 [2024-07-25 11:34:39.859779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.917 [2024-07-25 11:34:39.859936] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
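The companion check lands just above: json_config also insists that the top-level object's "subsystems" key holds an array, which is what nonarray.json presumably violates. For contrast, a minimal well-formed skeleton for the --json loader; the inner "subsystem"/"config" entry mirrors the bdev subsystem object gen_nvme.sh produces later in this log:

    # Smallest shape json_config accepts: a top-level object whose
    # "subsystems" member is an array of { "subsystem", "config" } entries.
    printf '%s\n' \
      '{' \
      '  "subsystems": [' \
      '    { "subsystem": "bdev", "config": [] }' \
      '  ]' \
      '}' > /tmp/valid-skeleton.json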
00:09:40.917 [2024-07-25 11:34:39.859975] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:40.917 [2024-07-25 11:34:39.859995] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:41.483 00:09:41.483 real 0m1.048s 00:09:41.483 user 0m0.771s 00:09:41.483 sys 0m0.169s 00:09:41.483 11:34:40 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:41.483 ************************************ 00:09:41.483 END TEST bdev_json_nonarray 00:09:41.483 ************************************ 00:09:41.483 11:34:40 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:09:41.483 11:34:40 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:09:41.483 11:34:40 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:09:41.483 11:34:40 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:09:41.483 11:34:40 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:09:41.483 11:34:40 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:09:41.484 11:34:40 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:09:41.484 11:34:40 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:41.484 11:34:40 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:09:41.484 11:34:40 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:09:41.484 11:34:40 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:09:41.484 11:34:40 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:09:41.484 ************************************ 00:09:41.484 END TEST blockdev_nvme 00:09:41.484 ************************************ 00:09:41.484 00:09:41.484 real 0m45.895s 00:09:41.484 user 1m7.520s 00:09:41.484 sys 0m7.248s 00:09:41.484 11:34:40 blockdev_nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:41.484 11:34:40 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:41.484 11:34:40 -- spdk/autotest.sh@217 -- # uname -s 00:09:41.484 11:34:40 -- spdk/autotest.sh@217 -- # [[ Linux == Linux ]] 00:09:41.484 11:34:40 -- spdk/autotest.sh@218 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:09:41.484 11:34:40 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:41.484 11:34:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:41.484 11:34:40 -- common/autotest_common.sh@10 -- # set +x 00:09:41.484 ************************************ 00:09:41.484 START TEST blockdev_nvme_gpt 00:09:41.484 ************************************ 00:09:41.484 11:34:40 blockdev_nvme_gpt -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:09:41.484 * Looking for test storage... 
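Every START TEST/END TEST banner and real/user/sys triplet in this log is produced by the run_test helper from autotest_common.sh. Its observable behavior is roughly the sketch below, an approximation reconstructed from the output rather than the actual implementation:

    # Approximation of run_test, inferred from the banners and `time`
    # output in this log; the real autotest_common.sh code differs.
    run_test() {
      local test_name=$1
      shift
      echo "************************************"
      echo "START TEST $test_name"
      echo "************************************"
      time "$@"    # emits the real/user/sys triplet seen after each test
      local rc=$?
      echo "************************************"
      echo "END TEST $test_name"
      echo "************************************"
      return "$rc"
    }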
00:09:41.484 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:09:41.484 11:34:40 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:41.484 11:34:40 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:09:41.484 11:34:40 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:09:41.484 11:34:40 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:41.484 11:34:40 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:09:41.484 11:34:40 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:09:41.484 11:34:40 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:09:41.484 11:34:40 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:09:41.484 11:34:40 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:09:41.484 11:34:40 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:09:41.484 11:34:40 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:09:41.484 11:34:40 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:09:41.484 11:34:40 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:09:41.484 11:34:40 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:09:41.484 11:34:40 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:09:41.484 11:34:40 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:09:41.484 11:34:40 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:09:41.484 11:34:40 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:09:41.484 11:34:40 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:09:41.484 11:34:40 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:09:41.484 11:34:40 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:09:41.484 11:34:40 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:09:41.484 11:34:40 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:09:41.484 11:34:40 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:09:41.484 11:34:40 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=66364 00:09:41.484 11:34:40 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:41.484 11:34:40 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 66364 00:09:41.484 11:34:40 blockdev_nvme_gpt -- common/autotest_common.sh@831 -- # '[' -z 66364 ']' 00:09:41.484 11:34:40 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:41.484 11:34:40 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:41.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:41.484 11:34:40 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:41.484 11:34:40 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:41.484 11:34:40 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:41.484 11:34:40 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:41.741 [2024-07-25 11:34:40.653271] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:09:41.741 [2024-07-25 11:34:40.654007] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66364 ] 00:09:41.999 [2024-07-25 11:34:40.831222] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.257 [2024-07-25 11:34:41.119235] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.191 11:34:41 blockdev_nvme_gpt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:43.191 11:34:41 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # return 0 00:09:43.191 11:34:41 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:09:43.191 11:34:41 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:09:43.191 11:34:41 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:43.449 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:43.449 Waiting for block devices as requested 00:09:43.706 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:43.706 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:43.706 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:43.963 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:49.239 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:49.239 11:34:47 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:09:49.239 11:34:47 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:09:49.239 11:34:47 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:09:49.239 11:34:47 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # local nvme bdf 00:09:49.239 11:34:47 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:09:49.239 11:34:47 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:09:49.239 11:34:47 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:09:49.239 11:34:47 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:09:49.239 11:34:47 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:09:49.239 11:34:47 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:09:49.239 11:34:47 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:09:49.239 11:34:47 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:09:49.239 11:34:47 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:09:49.239 11:34:47 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:09:49.239 11:34:47 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:09:49.239 11:34:47 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # 
is_block_zoned nvme2n1 00:09:49.239 11:34:47 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:09:49.239 11:34:47 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:09:49.239 11:34:47 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:09:49.239 11:34:47 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:09:49.239 11:34:47 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:09:49.239 11:34:47 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:09:49.239 11:34:47 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:09:49.239 11:34:47 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:09:49.239 11:34:47 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:09:49.239 11:34:47 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:09:49.239 11:34:47 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:09:49.239 11:34:47 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:09:49.239 11:34:47 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:09:49.239 11:34:47 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:09:49.239 11:34:47 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:09:49.239 11:34:47 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:09:49.239 11:34:47 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:09:49.239 11:34:47 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:09:49.239 11:34:47 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:09:49.239 11:34:47 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:09:49.239 11:34:47 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:09:49.239 11:34:47 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:09:49.239 11:34:47 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:09:49.239 11:34:47 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:09:49.239 11:34:47 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:09:49.239 11:34:47 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:09:49.239 11:34:47 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:09:49.239 11:34:47 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:09:49.239 11:34:47 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:09:49.239 11:34:47 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:09:49.239 11:34:47 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:09:49.239 BYT; 00:09:49.240 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:09:49.240 11:34:47 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:09:49.240 BYT; 
00:09:49.240 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:09:49.240 11:34:47 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:09:49.240 11:34:47 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:09:49.240 11:34:47 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:09:49.240 11:34:47 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:09:49.240 11:34:47 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:09:49.240 11:34:47 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:09:49.240 11:34:47 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:09:49.240 11:34:47 blockdev_nvme_gpt -- scripts/common.sh@408 -- # local spdk_guid 00:09:49.240 11:34:47 blockdev_nvme_gpt -- scripts/common.sh@410 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:09:49.240 11:34:47 blockdev_nvme_gpt -- scripts/common.sh@412 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:49.240 11:34:47 blockdev_nvme_gpt -- scripts/common.sh@413 -- # IFS='()' 00:09:49.240 11:34:47 blockdev_nvme_gpt -- scripts/common.sh@413 -- # read -r _ spdk_guid _ 00:09:49.240 11:34:47 blockdev_nvme_gpt -- scripts/common.sh@413 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:49.240 11:34:47 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:09:49.240 11:34:47 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:09:49.240 11:34:47 blockdev_nvme_gpt -- scripts/common.sh@416 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:09:49.240 11:34:47 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:09:49.240 11:34:47 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:09:49.240 11:34:47 blockdev_nvme_gpt -- scripts/common.sh@420 -- # local spdk_guid 00:09:49.240 11:34:47 blockdev_nvme_gpt -- scripts/common.sh@422 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:09:49.240 11:34:47 blockdev_nvme_gpt -- scripts/common.sh@424 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:49.240 11:34:47 blockdev_nvme_gpt -- scripts/common.sh@425 -- # IFS='()' 00:09:49.240 11:34:47 blockdev_nvme_gpt -- scripts/common.sh@425 -- # read -r _ spdk_guid _ 00:09:49.240 11:34:47 blockdev_nvme_gpt -- scripts/common.sh@425 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:49.240 11:34:47 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:09:49.240 11:34:47 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:09:49.240 11:34:47 blockdev_nvme_gpt -- scripts/common.sh@428 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:09:49.240 11:34:47 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:09:49.240 11:34:47 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:09:50.174 The operation has completed successfully. 
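The xtrace block above packs the whole GPT setup into one stream: find a disk whose label parted cannot recognise, lift SPDK's partition-type GUID out of module/bdev/gpt/gpt.h, create two partitions, and retype them with sgdisk. Condensed into plain shell with the same paths and commands; the ${spdk_guid//0x/} strip is one way to reproduce the 0x-removal the trace shows:

    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    GPT_H=$SPDK_DIR/module/bdev/gpt/gpt.h
    dev=/dev/nvme0n1   # the device whose label parted failed to recognise

    # The GUID macro argument in gpt.h is parenthesised, so splitting the
    # matching line on '(' and ')' isolates it; then drop the 0x prefixes.
    IFS='()' read -r _ spdk_guid _ < <(grep -w SPDK_GPT_PART_TYPE_GUID "$GPT_H")
    spdk_guid=${spdk_guid//0x/}

    # Label the disk, split it 50/50, then stamp partition 1 with SPDK's
    # type GUID and the fixed unique GUID used by the test, as above.
    parted -s "$dev" mklabel gpt \
      mkpart SPDK_TEST_first 0% 50% \
      mkpart SPDK_TEST_second 50% 100%
    sgdisk -t "1:$spdk_guid" -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 "$dev"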
00:09:50.174 11:34:48 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:09:51.108 The operation has completed successfully. 00:09:51.108 11:34:49 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:51.675 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:52.241 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:52.241 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:52.241 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:52.241 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:52.241 11:34:51 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:09:52.241 11:34:51 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.241 11:34:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:52.241 [] 00:09:52.241 11:34:51 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.241 11:34:51 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:09:52.241 11:34:51 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:09:52.241 11:34:51 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:09:52.241 11:34:51 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:52.499 11:34:51 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:09:52.499 11:34:51 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.499 11:34:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:52.758 11:34:51 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.758 11:34:51 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:09:52.758 11:34:51 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.758 11:34:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:52.758 11:34:51 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.758 11:34:51 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:09:52.758 11:34:51 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:09:52.758 11:34:51 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.758 11:34:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:52.758 11:34:51 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.758 11:34:51 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:09:52.758 11:34:51 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.758 11:34:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:52.758 11:34:51 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.758 
11:34:51 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:09:52.758 11:34:51 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.758 11:34:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:52.758 11:34:51 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.758 11:34:51 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:09:52.758 11:34:51 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:09:52.758 11:34:51 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:09:52.758 11:34:51 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.758 11:34:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:52.758 11:34:51 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.758 11:34:51 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:09:52.758 11:34:51 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:09:52.759 11:34:51 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "0f19311f-24e4-418a-815c-c6ce1b3e226c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "0f19311f-24e4-418a-815c-c6ce1b3e226c",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' 
"seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "7bbbd89b-f2da-4906-ba5e-1352d018636f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "7bbbd89b-f2da-4906-ba5e-1352d018636f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "48346571-d48f-4526-a030-e9fbe5890b71"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "48346571-d48f-4526-a030-e9fbe5890b71",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' 
"nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "857235b4-2967-4092-8b9c-1edd5e18aab3"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "857235b4-2967-4092-8b9c-1edd5e18aab3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "21b1fad3-7677-4fb2-a623-a112f299bfad"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "21b1fad3-7677-4fb2-a623-a112f299bfad",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": 
"0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:09:53.041 11:34:51 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:09:53.041 11:34:51 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:09:53.041 11:34:51 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:09:53.041 11:34:51 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 66364 00:09:53.041 11:34:51 blockdev_nvme_gpt -- common/autotest_common.sh@950 -- # '[' -z 66364 ']' 00:09:53.041 11:34:51 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # kill -0 66364 00:09:53.041 11:34:51 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # uname 00:09:53.041 11:34:51 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:53.041 11:34:51 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66364 00:09:53.041 killing process with pid 66364 00:09:53.042 11:34:51 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:53.042 11:34:51 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:53.042 11:34:51 blockdev_nvme_gpt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66364' 00:09:53.042 11:34:51 blockdev_nvme_gpt -- common/autotest_common.sh@969 -- # kill 66364 00:09:53.042 11:34:51 blockdev_nvme_gpt -- common/autotest_common.sh@974 -- # wait 66364 00:09:55.579 11:34:54 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:55.579 11:34:54 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:09:55.579 11:34:54 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:09:55.579 11:34:54 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:55.580 11:34:54 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:55.580 ************************************ 00:09:55.580 START TEST bdev_hello_world 00:09:55.580 ************************************ 00:09:55.580 11:34:54 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:09:55.580 [2024-07-25 11:34:54.221432] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:09:55.580 [2024-07-25 11:34:54.221623] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67002 ] 00:09:55.580 [2024-07-25 11:34:54.399197] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.837 [2024-07-25 11:34:54.675544] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.402 [2024-07-25 11:34:55.330109] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:09:56.402 [2024-07-25 11:34:55.330193] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:09:56.402 [2024-07-25 11:34:55.330237] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:09:56.402 [2024-07-25 11:34:55.333783] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:09:56.402 [2024-07-25 11:34:55.334265] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:09:56.402 [2024-07-25 11:34:55.334301] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:09:56.402 [2024-07-25 11:34:55.334692] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:09:56.402 00:09:56.402 [2024-07-25 11:34:55.334749] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:09:57.774 ************************************ 00:09:57.774 END TEST bdev_hello_world 00:09:57.774 ************************************ 00:09:57.774 00:09:57.774 real 0m2.498s 00:09:57.774 user 0m2.073s 00:09:57.774 sys 0m0.311s 00:09:57.774 11:34:56 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:57.774 11:34:56 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:09:57.774 11:34:56 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:09:57.774 11:34:56 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:57.774 11:34:56 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:57.774 11:34:56 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:57.774 ************************************ 00:09:57.774 START TEST bdev_bounds 00:09:57.774 ************************************ 00:09:57.774 11:34:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:09:57.774 11:34:56 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=67050 00:09:57.774 11:34:56 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:09:57.774 Process bdevio pid: 67050 00:09:57.774 11:34:56 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 67050' 00:09:57.774 11:34:56 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 67050 00:09:57.774 11:34:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 67050 ']' 00:09:57.774 11:34:56 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:57.774 11:34:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:57.774 11:34:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:57.774 11:34:56 
blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:57.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:57.774 11:34:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:57.774 11:34:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:09:57.774 [2024-07-25 11:34:56.750899] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:09:57.774 [2024-07-25 11:34:56.751086] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67050 ] 00:09:58.032 [2024-07-25 11:34:56.920616] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:58.290 [2024-07-25 11:34:57.201169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:58.290 [2024-07-25 11:34:57.201315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.290 [2024-07-25 11:34:57.201332] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:58.856 11:34:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:58.856 11:34:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:09:58.856 11:34:57 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:09:59.115 I/O targets: 00:09:59.115 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:09:59.115 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:09:59.115 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:09:59.115 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:59.115 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:59.115 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:59.115 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:09:59.115 00:09:59.115 00:09:59.115 CUnit - A unit testing framework for C - Version 2.1-3 00:09:59.115 http://cunit.sourceforge.net/ 00:09:59.115 00:09:59.115 00:09:59.115 Suite: bdevio tests on: Nvme3n1 00:09:59.115 Test: blockdev write read block ...passed 00:09:59.115 Test: blockdev write zeroes read block ...passed 00:09:59.115 Test: blockdev write zeroes read no split ...passed 00:09:59.115 Test: blockdev write zeroes read split ...passed 00:09:59.115 Test: blockdev write zeroes read split partial ...passed 00:09:59.115 Test: blockdev reset ...[2024-07-25 11:34:58.107400] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:09:59.115 [2024-07-25 11:34:58.111744] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:59.115 passed 00:09:59.115 Test: blockdev write read 8 blocks ...passed 00:09:59.115 Test: blockdev write read size > 128k ...passed 00:09:59.115 Test: blockdev write read invalid size ...passed 00:09:59.115 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:59.115 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:59.115 Test: blockdev write read max offset ...passed 00:09:59.115 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:59.115 Test: blockdev writev readv 8 blocks ...passed 00:09:59.115 Test: blockdev writev readv 30 x 1block ...passed 00:09:59.115 Test: blockdev writev readv block ...passed 00:09:59.115 Test: blockdev writev readv size > 128k ...passed 00:09:59.115 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:59.115 Test: blockdev comparev and writev ...[2024-07-25 11:34:58.121815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x273c06000 len:0x1000 00:09:59.115 [2024-07-25 11:34:58.121881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:59.115 passed 00:09:59.115 Test: blockdev nvme passthru rw ...passed 00:09:59.115 Test: blockdev nvme passthru vendor specific ...passed 00:09:59.115 Test: blockdev nvme admin passthru ...[2024-07-25 11:34:58.122735] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:59.115 [2024-07-25 11:34:58.122789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:59.115 passed 00:09:59.115 Test: blockdev copy ...passed 00:09:59.115 Suite: bdevio tests on: Nvme2n3 00:09:59.115 Test: blockdev write read block ...passed 00:09:59.115 Test: blockdev write zeroes read block ...passed 00:09:59.115 Test: blockdev write zeroes read no split ...passed 00:09:59.373 Test: blockdev write zeroes read split ...passed 00:09:59.373 Test: blockdev write zeroes read split partial ...passed 00:09:59.373 Test: blockdev reset ...[2024-07-25 11:34:58.200569] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:09:59.373 [2024-07-25 11:34:58.205032] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:59.373 passed 00:09:59.373 Test: blockdev write read 8 blocks ...passed 00:09:59.373 Test: blockdev write read size > 128k ...passed 00:09:59.373 Test: blockdev write read invalid size ...passed 00:09:59.373 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:59.373 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:59.373 Test: blockdev write read max offset ...passed 00:09:59.373 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:59.373 Test: blockdev writev readv 8 blocks ...passed 00:09:59.373 Test: blockdev writev readv 30 x 1block ...passed 00:09:59.373 Test: blockdev writev readv block ...passed 00:09:59.373 Test: blockdev writev readv size > 128k ...passed 00:09:59.373 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:59.373 Test: blockdev comparev and writev ...[2024-07-25 11:34:58.214050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x28783c000 len:0x1000 00:09:59.374 [2024-07-25 11:34:58.214111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:59.374 passed 00:09:59.374 Test: blockdev nvme passthru rw ...passed 00:09:59.374 Test: blockdev nvme passthru vendor specific ...passed 00:09:59.374 Test: blockdev nvme admin passthru ...[2024-07-25 11:34:58.214934] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:59.374 [2024-07-25 11:34:58.214989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:59.374 passed 00:09:59.374 Test: blockdev copy ...passed 00:09:59.374 Suite: bdevio tests on: Nvme2n2 00:09:59.374 Test: blockdev write read block ...passed 00:09:59.374 Test: blockdev write zeroes read block ...passed 00:09:59.374 Test: blockdev write zeroes read no split ...passed 00:09:59.374 Test: blockdev write zeroes read split ...passed 00:09:59.374 Test: blockdev write zeroes read split partial ...passed 00:09:59.374 Test: blockdev reset ...[2024-07-25 11:34:58.291859] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:09:59.374 [2024-07-25 11:34:58.296370] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:59.374 passed 00:09:59.374 Test: blockdev write read 8 blocks ...passed 00:09:59.374 Test: blockdev write read size > 128k ...passed 00:09:59.374 Test: blockdev write read invalid size ...passed 00:09:59.374 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:59.374 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:59.374 Test: blockdev write read max offset ...passed 00:09:59.374 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:59.374 Test: blockdev writev readv 8 blocks ...passed 00:09:59.374 Test: blockdev writev readv 30 x 1block ...passed 00:09:59.374 Test: blockdev writev readv block ...passed 00:09:59.374 Test: blockdev writev readv size > 128k ...passed 00:09:59.374 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:59.374 Test: blockdev comparev and writev ...[2024-07-25 11:34:58.306146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x287836000 len:0x1000 00:09:59.374 [2024-07-25 11:34:58.306212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:59.374 passed 00:09:59.374 Test: blockdev nvme passthru rw ...passed 00:09:59.374 Test: blockdev nvme passthru vendor specific ...passed 00:09:59.374 Test: blockdev nvme admin passthru ...[2024-07-25 11:34:58.307053] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:59.374 [2024-07-25 11:34:58.307104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:59.374 passed 00:09:59.374 Test: blockdev copy ...passed 00:09:59.374 Suite: bdevio tests on: Nvme2n1 00:09:59.374 Test: blockdev write read block ...passed 00:09:59.374 Test: blockdev write zeroes read block ...passed 00:09:59.374 Test: blockdev write zeroes read no split ...passed 00:09:59.374 Test: blockdev write zeroes read split ...passed 00:09:59.374 Test: blockdev write zeroes read split partial ...passed 00:09:59.374 Test: blockdev reset ...[2024-07-25 11:34:58.376542] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:09:59.374 [2024-07-25 11:34:58.380870] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:59.374 passed 00:09:59.374 Test: blockdev write read 8 blocks ...passed 00:09:59.374 Test: blockdev write read size > 128k ...passed 00:09:59.374 Test: blockdev write read invalid size ...passed 00:09:59.374 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:59.374 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:59.374 Test: blockdev write read max offset ...passed 00:09:59.374 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:59.374 Test: blockdev writev readv 8 blocks ...passed 00:09:59.374 Test: blockdev writev readv 30 x 1block ...passed 00:09:59.374 Test: blockdev writev readv block ...passed 00:09:59.374 Test: blockdev writev readv size > 128k ...passed 00:09:59.374 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:59.374 Test: blockdev comparev and writev ...[2024-07-25 11:34:58.389854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x287832000 len:0x1000 00:09:59.374 [2024-07-25 11:34:58.389938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:59.374 passed 00:09:59.374 Test: blockdev nvme passthru rw ...passed 00:09:59.374 Test: blockdev nvme passthru vendor specific ...[2024-07-25 11:34:58.390785] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:59.374 passed 00:09:59.374 Test: blockdev nvme admin passthru ...[2024-07-25 11:34:58.390830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:59.374 passed 00:09:59.374 Test: blockdev copy ...passed 00:09:59.374 Suite: bdevio tests on: Nvme1n1p2 00:09:59.374 Test: blockdev write read block ...passed 00:09:59.374 Test: blockdev write zeroes read block ...passed 00:09:59.374 Test: blockdev write zeroes read no split ...passed 00:09:59.632 Test: blockdev write zeroes read split ...passed 00:09:59.632 Test: blockdev write zeroes read split partial ...passed 00:09:59.632 Test: blockdev reset ...[2024-07-25 11:34:58.468472] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:09:59.632 [2024-07-25 11:34:58.472345] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:59.632 passed 00:09:59.632 Test: blockdev write read 8 blocks ...passed 00:09:59.632 Test: blockdev write read size > 128k ...passed 00:09:59.632 Test: blockdev write read invalid size ...passed 00:09:59.632 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:59.632 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:59.632 Test: blockdev write read max offset ...passed 00:09:59.632 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:59.632 Test: blockdev writev readv 8 blocks ...passed 00:09:59.632 Test: blockdev writev readv 30 x 1block ...passed 00:09:59.632 Test: blockdev writev readv block ...passed 00:09:59.632 Test: blockdev writev readv size > 128k ...passed 00:09:59.632 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:59.632 Test: blockdev comparev and writev ...[2024-07-25 11:34:58.483503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x28782e000 len:0x1000 00:09:59.632 [2024-07-25 11:34:58.483715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:59.632 passed 00:09:59.633 Test: blockdev nvme passthru rw ...passed 00:09:59.633 Test: blockdev nvme passthru vendor specific ...passed 00:09:59.633 Test: blockdev nvme admin passthru ...passed 00:09:59.633 Test: blockdev copy ...passed 00:09:59.633 Suite: bdevio tests on: Nvme1n1p1 00:09:59.633 Test: blockdev write read block ...passed 00:09:59.633 Test: blockdev write zeroes read block ...passed 00:09:59.633 Test: blockdev write zeroes read no split ...passed 00:09:59.633 Test: blockdev write zeroes read split ...passed 00:09:59.633 Test: blockdev write zeroes read split partial ...passed 00:09:59.633 Test: blockdev reset ...[2024-07-25 11:34:58.551541] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:09:59.633 [2024-07-25 11:34:58.555347] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
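Note: the comparev on Nvme1n1p2 above lands on the base namespace at lba:655360 because the GPT part bdev translates partition-relative offset 0 to the partition's starting LBA on Nvme1n1. With the 4096-byte block size inferred from the single-block 0x1000-byte SGL (an inference, not stated in the log), that start sits 2.5 GiB into the namespace:

    $ echo $((655360 * 4096))   # byte offset of Nvme1n1p2 on Nvme1n1
    2684354560                  # = 2.5 GiB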
00:09:59.633 passed 00:09:59.633 Test: blockdev write read 8 blocks ...passed 00:09:59.633 Test: blockdev write read size > 128k ...passed 00:09:59.633 Test: blockdev write read invalid size ...passed 00:09:59.633 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:59.633 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:59.633 Test: blockdev write read max offset ...passed 00:09:59.633 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:59.633 Test: blockdev writev readv 8 blocks ...passed 00:09:59.633 Test: blockdev writev readv 30 x 1block ...passed 00:09:59.633 Test: blockdev writev readv block ...passed 00:09:59.633 Test: blockdev writev readv size > 128k ...passed 00:09:59.633 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:59.633 Test: blockdev comparev and writev ...[2024-07-25 11:34:58.565363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x284c0e000 len:0x1000 00:09:59.633 [2024-07-25 11:34:58.565422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:59.633 passed 00:09:59.633 Test: blockdev nvme passthru rw ...passed 00:09:59.633 Test: blockdev nvme passthru vendor specific ...passed 00:09:59.633 Test: blockdev nvme admin passthru ...passed 00:09:59.633 Test: blockdev copy ...passed 00:09:59.633 Suite: bdevio tests on: Nvme0n1 00:09:59.633 Test: blockdev write read block ...passed 00:09:59.633 Test: blockdev write zeroes read block ...passed 00:09:59.633 Test: blockdev write zeroes read no split ...passed 00:09:59.633 Test: blockdev write zeroes read split ...passed 00:09:59.633 Test: blockdev write zeroes read split partial ...passed 00:09:59.633 Test: blockdev reset ...[2024-07-25 11:34:58.632629] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:09:59.633 [2024-07-25 11:34:58.636422] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:09:59.633 passed 00:09:59.633 Test: blockdev write read 8 blocks ...passed 00:09:59.633 Test: blockdev write read size > 128k ...passed 00:09:59.633 Test: blockdev write read invalid size ...passed 00:09:59.633 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:59.633 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:59.633 Test: blockdev write read max offset ...passed 00:09:59.633 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:59.633 Test: blockdev writev readv 8 blocks ...passed 00:09:59.633 Test: blockdev writev readv 30 x 1block ...passed 00:09:59.633 Test: blockdev writev readv block ...passed 00:09:59.633 Test: blockdev writev readv size > 128k ...passed 00:09:59.633 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:59.633 Test: blockdev comparev and writev ...passed 00:09:59.633 Test: blockdev nvme passthru rw ...[2024-07-25 11:34:58.643999] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:09:59.633 separate metadata which is not supported yet. 
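Note: bdevio skips compare-and-write on Nvme0n1 only, because that namespace is formatted with separate (non-interleaved) metadata, which the test does not support yet; the suites above ran it normally. As a hedged sketch, one way to check which bdevs carry separate metadata is to query them over the same RPC interface; the md_size and md_interleave field names are assumptions about the bdev_get_bdevs output in this tree:

    # Hedged sketch: field names assumed, not confirmed by this log.
    $ scripts/rpc.py bdev_get_bdevs -b Nvme0n1 | jq '.[0] | {md_size, md_interleave}'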
00:09:59.633 passed 00:09:59.633 Test: blockdev nvme passthru vendor specific ...passed 00:09:59.633 Test: blockdev nvme admin passthru ...[2024-07-25 11:34:58.644637] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:09:59.633 [2024-07-25 11:34:58.644697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:09:59.633 passed 00:09:59.633 Test: blockdev copy ...passed 00:09:59.633 00:09:59.633 Run Summary: Type Total Ran Passed Failed Inactive 00:09:59.633 suites 7 7 n/a 0 0 00:09:59.633 tests 161 161 161 0 0 00:09:59.633 asserts 1025 1025 1025 0 n/a 00:09:59.633 00:09:59.633 Elapsed time = 1.691 seconds 00:09:59.633 0 00:09:59.633 11:34:58 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 67050 00:09:59.633 11:34:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 67050 ']' 00:09:59.633 11:34:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 67050 00:09:59.633 11:34:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:09:59.633 11:34:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:59.891 11:34:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67050 00:09:59.891 11:34:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:59.891 11:34:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:59.891 11:34:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67050' 00:09:59.891 killing process with pid 67050 00:09:59.891 11:34:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@969 -- # kill 67050 00:09:59.891 11:34:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@974 -- # wait 67050 00:10:00.825 11:34:59 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:10:00.825 00:10:00.825 real 0m3.088s 00:10:00.825 user 0m7.532s 00:10:00.825 sys 0m0.452s 00:10:00.825 11:34:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:00.825 11:34:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:10:00.825 ************************************ 00:10:00.825 END TEST bdev_bounds 00:10:00.825 ************************************ 00:10:00.825 11:34:59 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:10:00.825 11:34:59 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:00.825 11:34:59 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:00.825 11:34:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:00.825 ************************************ 00:10:00.825 START TEST bdev_nbd 00:10:00.825 ************************************ 00:10:00.825 11:34:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:10:00.825 11:34:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:10:00.825 11:34:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:10:00.825 11:34:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:00.825 11:34:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:00.825 11:34:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:00.825 11:34:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:10:00.825 11:34:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:10:00.825 11:34:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:10:00.825 11:34:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:10:00.825 11:34:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:10:00.825 11:34:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:10:00.825 11:34:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:00.825 11:34:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:10:00.825 11:34:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:00.825 11:34:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:10:00.825 11:34:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=67114 00:10:00.825 11:34:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:10:00.825 11:34:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:10:00.825 11:34:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 67114 /var/tmp/spdk-nbd.sock 00:10:00.825 11:34:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 67114 ']' 00:10:00.825 11:34:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:00.825 11:34:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:00.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:00.825 11:34:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:00.825 11:34:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:00.825 11:34:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:10:01.083 [2024-07-25 11:34:59.918181] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
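Note: for the nbd test the harness launches a standalone bdev_svc app bound to the private RPC socket /var/tmp/spdk-nbd.sock and blocks in waitforlisten until the socket answers. A minimal sketch of that wait, assuming rpc_get_methods as the probe (the real waitforlisten helper may probe differently):

    # Hedged sketch: poll the app's RPC socket until it responds.
    $ until scripts/rpc.py -s /var/tmp/spdk-nbd.sock rpc_get_methods >/dev/null 2>&1; do
    >   sleep 0.1
    > done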
00:10:01.083 [2024-07-25 11:34:59.918366] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:01.083 [2024-07-25 11:35:00.099717] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.343 [2024-07-25 11:35:00.380870] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.278 11:35:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:02.278 11:35:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:10:02.278 11:35:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:10:02.278 11:35:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:02.278 11:35:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:02.278 11:35:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:10:02.278 11:35:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:10:02.278 11:35:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:02.278 11:35:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:02.278 11:35:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:10:02.278 11:35:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:10:02.278 11:35:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:10:02.278 11:35:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:10:02.278 11:35:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:02.278 11:35:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:10:02.536 11:35:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:10:02.536 11:35:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:10:02.536 11:35:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:10:02.536 11:35:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:10:02.536 11:35:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:10:02.536 11:35:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:02.536 11:35:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:02.536 11:35:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:10:02.536 11:35:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:10:02.536 11:35:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:02.536 11:35:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:02.536 11:35:01 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:02.536 1+0 records in 00:10:02.536 1+0 records out 00:10:02.536 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000773593 s, 5.3 MB/s 00:10:02.536 11:35:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:02.536 11:35:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:10:02.536 11:35:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:02.536 11:35:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:02.536 11:35:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:10:02.536 11:35:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:02.536 11:35:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:02.536 11:35:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:10:02.795 11:35:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:10:02.795 11:35:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:10:02.795 11:35:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:10:02.795 11:35:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:10:02.795 11:35:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:10:02.795 11:35:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:02.795 11:35:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:02.795 11:35:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:10:02.795 11:35:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:10:02.795 11:35:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:02.795 11:35:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:02.795 11:35:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:02.795 1+0 records in 00:10:02.795 1+0 records out 00:10:02.795 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000694058 s, 5.9 MB/s 00:10:02.795 11:35:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:02.795 11:35:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:10:02.795 11:35:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:02.795 11:35:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:02.795 11:35:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:10:02.795 11:35:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:02.795 11:35:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:02.795 11:35:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:10:03.053 11:35:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:10:03.053 11:35:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:10:03.053 11:35:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:10:03.053 11:35:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:10:03.053 11:35:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:10:03.053 11:35:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:03.053 11:35:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:03.053 11:35:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:10:03.053 11:35:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:10:03.053 11:35:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:03.053 11:35:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:03.053 11:35:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:03.053 1+0 records in 00:10:03.053 1+0 records out 00:10:03.053 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00265349 s, 1.5 MB/s 00:10:03.053 11:35:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:03.053 11:35:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:10:03.053 11:35:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:03.053 11:35:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:03.053 11:35:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:10:03.053 11:35:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:03.053 11:35:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:03.053 11:35:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:10:03.311 11:35:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:10:03.311 11:35:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:10:03.311 11:35:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:10:03.312 11:35:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:10:03.312 11:35:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:10:03.312 11:35:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:03.312 11:35:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:03.312 11:35:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:10:03.312 11:35:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:10:03.312 11:35:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:03.312 11:35:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:03.312 11:35:02 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:03.569 1+0 records in 00:10:03.569 1+0 records out 00:10:03.569 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000730368 s, 5.6 MB/s 00:10:03.569 11:35:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:03.569 11:35:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:10:03.569 11:35:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:03.569 11:35:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:03.570 11:35:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:10:03.570 11:35:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:03.570 11:35:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:03.570 11:35:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:10:03.570 11:35:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:10:03.570 11:35:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:10:03.570 11:35:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:10:03.570 11:35:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:10:03.570 11:35:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:10:03.570 11:35:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:03.570 11:35:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:03.570 11:35:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:10:03.827 11:35:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:10:03.827 11:35:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:03.827 11:35:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:03.827 11:35:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:03.827 1+0 records in 00:10:03.827 1+0 records out 00:10:03.827 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000932348 s, 4.4 MB/s 00:10:03.827 11:35:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:03.827 11:35:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:10:03.827 11:35:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:03.827 11:35:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:03.827 11:35:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:10:03.827 11:35:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:03.827 11:35:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:03.827 11:35:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:10:04.085 11:35:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:10:04.085 11:35:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:10:04.085 11:35:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:10:04.085 11:35:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:10:04.085 11:35:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:10:04.085 11:35:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:04.085 11:35:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:04.085 11:35:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:10:04.085 11:35:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:10:04.085 11:35:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:04.085 11:35:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:04.085 11:35:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:04.085 1+0 records in 00:10:04.085 1+0 records out 00:10:04.085 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000998227 s, 4.1 MB/s 00:10:04.085 11:35:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:04.085 11:35:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:10:04.085 11:35:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:04.085 11:35:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:04.085 11:35:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:10:04.085 11:35:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:04.085 11:35:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:04.085 11:35:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:10:04.344 11:35:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:10:04.344 11:35:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:10:04.344 11:35:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:10:04.344 11:35:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6 00:10:04.344 11:35:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:10:04.344 11:35:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:04.344 11:35:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:04.344 11:35:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions 00:10:04.344 11:35:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:10:04.344 11:35:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:04.344 11:35:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:04.344 11:35:03 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:04.344 1+0 records in 00:10:04.344 1+0 records out 00:10:04.344 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00075479 s, 5.4 MB/s 00:10:04.344 11:35:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:04.344 11:35:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:10:04.344 11:35:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:04.344 11:35:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:04.344 11:35:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:10:04.344 11:35:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:04.344 11:35:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:04.344 11:35:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:04.603 11:35:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:10:04.603 { 00:10:04.603 "nbd_device": "/dev/nbd0", 00:10:04.603 "bdev_name": "Nvme0n1" 00:10:04.603 }, 00:10:04.603 { 00:10:04.603 "nbd_device": "/dev/nbd1", 00:10:04.603 "bdev_name": "Nvme1n1p1" 00:10:04.603 }, 00:10:04.603 { 00:10:04.603 "nbd_device": "/dev/nbd2", 00:10:04.603 "bdev_name": "Nvme1n1p2" 00:10:04.603 }, 00:10:04.603 { 00:10:04.603 "nbd_device": "/dev/nbd3", 00:10:04.603 "bdev_name": "Nvme2n1" 00:10:04.603 }, 00:10:04.603 { 00:10:04.603 "nbd_device": "/dev/nbd4", 00:10:04.603 "bdev_name": "Nvme2n2" 00:10:04.603 }, 00:10:04.603 { 00:10:04.603 "nbd_device": "/dev/nbd5", 00:10:04.603 "bdev_name": "Nvme2n3" 00:10:04.603 }, 00:10:04.603 { 00:10:04.603 "nbd_device": "/dev/nbd6", 00:10:04.603 "bdev_name": "Nvme3n1" 00:10:04.603 } 00:10:04.603 ]' 00:10:04.603 11:35:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:10:04.603 11:35:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:10:04.603 { 00:10:04.603 "nbd_device": "/dev/nbd0", 00:10:04.603 "bdev_name": "Nvme0n1" 00:10:04.603 }, 00:10:04.603 { 00:10:04.603 "nbd_device": "/dev/nbd1", 00:10:04.603 "bdev_name": "Nvme1n1p1" 00:10:04.603 }, 00:10:04.603 { 00:10:04.603 "nbd_device": "/dev/nbd2", 00:10:04.603 "bdev_name": "Nvme1n1p2" 00:10:04.603 }, 00:10:04.603 { 00:10:04.603 "nbd_device": "/dev/nbd3", 00:10:04.603 "bdev_name": "Nvme2n1" 00:10:04.603 }, 00:10:04.603 { 00:10:04.603 "nbd_device": "/dev/nbd4", 00:10:04.603 "bdev_name": "Nvme2n2" 00:10:04.603 }, 00:10:04.603 { 00:10:04.603 "nbd_device": "/dev/nbd5", 00:10:04.603 "bdev_name": "Nvme2n3" 00:10:04.603 }, 00:10:04.603 { 00:10:04.603 "nbd_device": "/dev/nbd6", 00:10:04.603 "bdev_name": "Nvme3n1" 00:10:04.603 } 00:10:04.603 ]' 00:10:04.603 11:35:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:10:04.603 11:35:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:10:04.603 11:35:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:04.603 11:35:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:10:04.603 11:35:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:04.603 11:35:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:04.603 11:35:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:04.603 11:35:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:04.861 11:35:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:04.861 11:35:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:04.861 11:35:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:04.861 11:35:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:04.861 11:35:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:04.861 11:35:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:04.861 11:35:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:04.861 11:35:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:04.861 11:35:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:04.861 11:35:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:05.119 11:35:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:05.119 11:35:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:05.119 11:35:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:05.119 11:35:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:05.119 11:35:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:05.119 11:35:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:05.119 11:35:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:05.119 11:35:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:05.119 11:35:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:05.119 11:35:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:10:05.377 11:35:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:10:05.377 11:35:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:10:05.377 11:35:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:10:05.377 11:35:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:05.377 11:35:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:05.377 11:35:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:10:05.377 11:35:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:05.377 11:35:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:05.377 11:35:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:05.377 11:35:04 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:10:05.943 11:35:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:10:05.943 11:35:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:10:05.943 11:35:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:10:05.943 11:35:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:05.943 11:35:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:05.943 11:35:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:10:05.943 11:35:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:05.943 11:35:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:05.943 11:35:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:05.943 11:35:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:10:05.943 11:35:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:10:05.943 11:35:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:10:05.943 11:35:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:10:05.943 11:35:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:05.943 11:35:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:05.943 11:35:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:10:05.943 11:35:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:05.943 11:35:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:05.943 11:35:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:05.943 11:35:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:10:06.201 11:35:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:10:06.201 11:35:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:10:06.201 11:35:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:10:06.201 11:35:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:06.201 11:35:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:06.201 11:35:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:10:06.201 11:35:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:06.201 11:35:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:06.201 11:35:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:06.201 11:35:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:10:06.767 11:35:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:10:06.767 11:35:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:10:06.767 11:35:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:10:06.767 11:35:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:06.767 11:35:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:06.767 11:35:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:10:06.767 11:35:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:06.767 11:35:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:06.767 11:35:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:06.767 11:35:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:06.767 11:35:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:06.767 11:35:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:06.767 11:35:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:06.767 11:35:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:07.025 11:35:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:07.025 11:35:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:10:07.025 11:35:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:07.025 11:35:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:10:07.025 11:35:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:10:07.025 11:35:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:10:07.025 11:35:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:10:07.025 11:35:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:10:07.025 11:35:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:10:07.025 11:35:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:10:07.025 11:35:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:07.025 11:35:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:07.025 11:35:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:07.025 11:35:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:07.025 11:35:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:07.025 11:35:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:10:07.025 11:35:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:07.025 11:35:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:07.025 11:35:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:07.025 
11:35:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:07.025 11:35:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:07.025 11:35:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:10:07.025 11:35:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:07.025 11:35:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:07.025 11:35:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:10:07.284 /dev/nbd0 00:10:07.284 11:35:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:07.284 11:35:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:07.284 11:35:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:10:07.284 11:35:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:10:07.284 11:35:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:07.284 11:35:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:07.284 11:35:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:10:07.284 11:35:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:10:07.284 11:35:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:07.284 11:35:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:07.284 11:35:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:07.284 1+0 records in 00:10:07.284 1+0 records out 00:10:07.284 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000466954 s, 8.8 MB/s 00:10:07.284 11:35:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:07.284 11:35:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:10:07.284 11:35:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:07.284 11:35:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:07.284 11:35:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:10:07.284 11:35:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:07.284 11:35:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:07.284 11:35:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:10:07.542 /dev/nbd1 00:10:07.542 11:35:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:07.542 11:35:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:07.542 11:35:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:10:07.542 11:35:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:10:07.542 11:35:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:07.542 11:35:06 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:07.542 11:35:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:10:07.542 11:35:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:10:07.542 11:35:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:07.542 11:35:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:07.542 11:35:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:07.542 1+0 records in 00:10:07.542 1+0 records out 00:10:07.542 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000602137 s, 6.8 MB/s 00:10:07.542 11:35:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:07.542 11:35:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:10:07.542 11:35:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:07.542 11:35:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:07.542 11:35:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:10:07.542 11:35:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:07.542 11:35:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:07.542 11:35:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:10:07.801 /dev/nbd10 00:10:07.801 11:35:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:10:07.801 11:35:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:10:07.801 11:35:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:10:07.801 11:35:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:10:07.801 11:35:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:07.801 11:35:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:07.801 11:35:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:10:07.801 11:35:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:10:07.801 11:35:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:07.801 11:35:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:07.801 11:35:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:07.801 1+0 records in 00:10:07.801 1+0 records out 00:10:07.801 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000577644 s, 7.1 MB/s 00:10:07.801 11:35:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:07.801 11:35:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:10:07.801 11:35:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:07.801 11:35:06 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:07.801 11:35:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:10:07.801 11:35:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:07.801 11:35:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:07.801 11:35:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:10:08.060 /dev/nbd11 00:10:08.060 11:35:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:10:08.060 11:35:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:10:08.060 11:35:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:10:08.060 11:35:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:10:08.060 11:35:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:08.060 11:35:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:08.060 11:35:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:10:08.060 11:35:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:10:08.060 11:35:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:08.060 11:35:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:08.060 11:35:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:08.060 1+0 records in 00:10:08.060 1+0 records out 00:10:08.060 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000728568 s, 5.6 MB/s 00:10:08.060 11:35:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:08.060 11:35:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:10:08.060 11:35:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:08.060 11:35:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:08.060 11:35:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:10:08.060 11:35:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:08.060 11:35:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:08.060 11:35:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:10:08.318 /dev/nbd12 00:10:08.318 11:35:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:10:08.318 11:35:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:10:08.318 11:35:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:10:08.318 11:35:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:10:08.318 11:35:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:08.318 11:35:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:08.318 11:35:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 
00:10:08.318 11:35:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:10:08.318 11:35:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:08.318 11:35:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:08.318 11:35:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:08.318 1+0 records in 00:10:08.318 1+0 records out 00:10:08.318 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000757571 s, 5.4 MB/s 00:10:08.576 11:35:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:08.576 11:35:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:10:08.576 11:35:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:08.576 11:35:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:08.576 11:35:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:10:08.576 11:35:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:08.576 11:35:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:08.576 11:35:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:10:08.576 /dev/nbd13 00:10:08.576 11:35:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:10:08.576 11:35:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:10:08.576 11:35:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:10:08.576 11:35:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:10:08.576 11:35:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:08.576 11:35:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:08.576 11:35:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:10:08.576 11:35:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:10:08.576 11:35:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:08.576 11:35:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:08.576 11:35:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:08.576 1+0 records in 00:10:08.576 1+0 records out 00:10:08.576 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000901719 s, 4.5 MB/s 00:10:08.576 11:35:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:08.834 11:35:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:10:08.834 11:35:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:08.834 11:35:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:08.834 11:35:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:10:08.834 11:35:07 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:08.834 11:35:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:08.834 11:35:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:10:09.092 /dev/nbd14 00:10:09.092 11:35:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:10:09.092 11:35:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:10:09.092 11:35:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14 00:10:09.092 11:35:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:10:09.092 11:35:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:09.092 11:35:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:09.092 11:35:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd14 /proc/partitions 00:10:09.092 11:35:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:10:09.092 11:35:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:09.092 11:35:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:09.092 11:35:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:09.092 1+0 records in 00:10:09.092 1+0 records out 00:10:09.092 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000960595 s, 4.3 MB/s 00:10:09.092 11:35:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:09.092 11:35:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:10:09.092 11:35:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:09.092 11:35:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:09.092 11:35:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:10:09.092 11:35:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:09.092 11:35:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:09.092 11:35:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:09.092 11:35:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:09.092 11:35:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:09.422 11:35:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:09.422 { 00:10:09.422 "nbd_device": "/dev/nbd0", 00:10:09.422 "bdev_name": "Nvme0n1" 00:10:09.422 }, 00:10:09.422 { 00:10:09.422 "nbd_device": "/dev/nbd1", 00:10:09.422 "bdev_name": "Nvme1n1p1" 00:10:09.422 }, 00:10:09.422 { 00:10:09.422 "nbd_device": "/dev/nbd10", 00:10:09.422 "bdev_name": "Nvme1n1p2" 00:10:09.422 }, 00:10:09.422 { 00:10:09.422 "nbd_device": "/dev/nbd11", 00:10:09.422 "bdev_name": "Nvme2n1" 00:10:09.422 }, 00:10:09.422 { 00:10:09.422 "nbd_device": "/dev/nbd12", 00:10:09.422 "bdev_name": "Nvme2n2" 00:10:09.422 }, 00:10:09.422 { 00:10:09.422 "nbd_device": "/dev/nbd13", 00:10:09.422 "bdev_name": "Nvme2n3" 
00:10:09.422 }, 00:10:09.422 { 00:10:09.422 "nbd_device": "/dev/nbd14", 00:10:09.422 "bdev_name": "Nvme3n1" 00:10:09.422 } 00:10:09.422 ]' 00:10:09.422 11:35:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:09.422 11:35:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:09.422 { 00:10:09.422 "nbd_device": "/dev/nbd0", 00:10:09.422 "bdev_name": "Nvme0n1" 00:10:09.422 }, 00:10:09.422 { 00:10:09.422 "nbd_device": "/dev/nbd1", 00:10:09.422 "bdev_name": "Nvme1n1p1" 00:10:09.422 }, 00:10:09.422 { 00:10:09.422 "nbd_device": "/dev/nbd10", 00:10:09.422 "bdev_name": "Nvme1n1p2" 00:10:09.422 }, 00:10:09.422 { 00:10:09.422 "nbd_device": "/dev/nbd11", 00:10:09.423 "bdev_name": "Nvme2n1" 00:10:09.423 }, 00:10:09.423 { 00:10:09.423 "nbd_device": "/dev/nbd12", 00:10:09.423 "bdev_name": "Nvme2n2" 00:10:09.423 }, 00:10:09.423 { 00:10:09.423 "nbd_device": "/dev/nbd13", 00:10:09.423 "bdev_name": "Nvme2n3" 00:10:09.423 }, 00:10:09.423 { 00:10:09.423 "nbd_device": "/dev/nbd14", 00:10:09.423 "bdev_name": "Nvme3n1" 00:10:09.423 } 00:10:09.423 ]' 00:10:09.423 11:35:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:09.423 /dev/nbd1 00:10:09.423 /dev/nbd10 00:10:09.423 /dev/nbd11 00:10:09.423 /dev/nbd12 00:10:09.423 /dev/nbd13 00:10:09.423 /dev/nbd14' 00:10:09.423 11:35:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:09.423 11:35:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:09.423 /dev/nbd1 00:10:09.423 /dev/nbd10 00:10:09.423 /dev/nbd11 00:10:09.423 /dev/nbd12 00:10:09.423 /dev/nbd13 00:10:09.423 /dev/nbd14' 00:10:09.423 11:35:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:10:09.423 11:35:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:10:09.423 11:35:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:10:09.423 11:35:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:10:09.423 11:35:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:10:09.423 11:35:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:09.423 11:35:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:09.423 11:35:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:09.423 11:35:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:09.423 11:35:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:09.423 11:35:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:10:09.423 256+0 records in 00:10:09.423 256+0 records out 00:10:09.423 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107325 s, 97.7 MB/s 00:10:09.423 11:35:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:09.423 11:35:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:09.423 256+0 records in 00:10:09.423 256+0 records out 00:10:09.423 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.166741 s, 6.3 MB/s 00:10:09.423 11:35:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:09.423 11:35:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:09.681 256+0 records in 00:10:09.681 256+0 records out 00:10:09.681 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.180299 s, 5.8 MB/s 00:10:09.681 11:35:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:09.681 11:35:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:10:09.940 256+0 records in 00:10:09.940 256+0 records out 00:10:09.940 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.184047 s, 5.7 MB/s 00:10:09.940 11:35:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:09.940 11:35:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:10:09.940 256+0 records in 00:10:09.940 256+0 records out 00:10:09.940 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.150541 s, 7.0 MB/s 00:10:09.940 11:35:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:09.940 11:35:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:10:10.198 256+0 records in 00:10:10.198 256+0 records out 00:10:10.198 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.177395 s, 5.9 MB/s 00:10:10.198 11:35:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:10.198 11:35:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:10:10.457 256+0 records in 00:10:10.457 256+0 records out 00:10:10.457 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.165961 s, 6.3 MB/s 00:10:10.457 11:35:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:10.457 11:35:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:10:10.457 256+0 records in 00:10:10.457 256+0 records out 00:10:10.457 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.168629 s, 6.2 MB/s 00:10:10.457 11:35:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:10:10.457 11:35:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:10.457 11:35:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:10.457 11:35:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:10.457 11:35:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:10.457 11:35:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:10.457 11:35:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:10.457 11:35:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:10:10.716 11:35:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:10:10.716 11:35:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:10.716 11:35:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:10:10.716 11:35:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:10.716 11:35:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:10:10.716 11:35:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:10.716 11:35:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:10:10.716 11:35:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:10.716 11:35:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:10:10.716 11:35:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:10.716 11:35:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:10:10.716 11:35:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:10.716 11:35:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:10:10.716 11:35:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:10.716 11:35:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:10:10.716 11:35:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:10.716 11:35:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:10.716 11:35:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:10.716 11:35:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:10.716 11:35:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:10.716 11:35:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:10.976 11:35:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:10.976 11:35:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:10.976 11:35:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:10.976 11:35:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:10.976 11:35:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:10.976 11:35:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:10.976 11:35:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:10.976 11:35:09 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:10:10.976 11:35:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:10.976 11:35:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:11.235 11:35:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:11.235 11:35:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:11.235 11:35:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:11.235 11:35:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:11.235 11:35:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:11.235 11:35:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:11.235 11:35:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:11.235 11:35:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:11.235 11:35:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:11.235 11:35:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:10:11.492 11:35:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:10:11.492 11:35:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:10:11.492 11:35:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:10:11.492 11:35:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:11.492 11:35:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:11.492 11:35:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:10:11.492 11:35:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:11.492 11:35:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:11.492 11:35:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:11.493 11:35:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:10:11.750 11:35:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:10:11.750 11:35:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:10:11.750 11:35:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:10:11.750 11:35:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:11.750 11:35:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:11.750 11:35:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:10:11.750 11:35:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:11.750 11:35:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:11.750 11:35:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:11.750 11:35:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:10:12.010 11:35:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:10:12.010 11:35:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:10:12.010 11:35:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:10:12.010 11:35:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:12.010 11:35:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:12.010 11:35:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:10:12.010 11:35:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:12.010 11:35:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:12.010 11:35:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:12.010 11:35:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:10:12.266 11:35:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:10:12.267 11:35:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:10:12.267 11:35:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:10:12.267 11:35:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:12.267 11:35:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:12.267 11:35:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:10:12.267 11:35:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:12.267 11:35:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:12.267 11:35:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:12.267 11:35:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:10:12.830 11:35:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:10:12.830 11:35:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:10:12.830 11:35:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:10:12.830 11:35:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:12.830 11:35:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:12.830 11:35:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:10:12.830 11:35:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:12.830 11:35:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:12.830 11:35:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:12.830 11:35:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:12.830 11:35:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:13.088 11:35:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:13.088 11:35:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:13.088 11:35:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:13.088 11:35:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:10:13.088 11:35:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:10:13.088 11:35:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:13.088 11:35:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:10:13.088 11:35:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:10:13.088 11:35:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:10:13.088 11:35:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:10:13.088 11:35:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:13.088 11:35:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:10:13.088 11:35:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:10:13.088 11:35:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:13.088 11:35:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:13.088 11:35:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:10:13.088 11:35:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:10:13.088 11:35:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:10:13.348 malloc_lvol_verify 00:10:13.348 11:35:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:10:13.606 18697ccd-ac47-444b-936c-9025233c226d 00:10:13.606 11:35:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:10:13.863 753bd3e1-d252-4a15-95be-cbaa26e4f096 00:10:13.863 11:35:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:10:14.121 /dev/nbd0 00:10:14.121 11:35:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:10:14.121 mke2fs 1.46.5 (30-Dec-2021) 00:10:14.121 Discarding device blocks: 0/4096 done 00:10:14.121 Creating filesystem with 4096 1k blocks and 1024 inodes 00:10:14.121 00:10:14.121 Allocating group tables: 0/1 done 00:10:14.121 Writing inode tables: 0/1 done 00:10:14.121 Creating journal (1024 blocks): done 00:10:14.121 Writing superblocks and filesystem accounting information: 0/1 done 00:10:14.121 00:10:14.121 11:35:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:10:14.121 11:35:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:10:14.121 11:35:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:14.121 11:35:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:10:14.121 11:35:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:14.121 11:35:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:14.121 11:35:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:10:14.121 11:35:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:14.379 11:35:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:14.380 11:35:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:14.380 11:35:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:14.380 11:35:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:14.380 11:35:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:14.380 11:35:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:14.380 11:35:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:14.380 11:35:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:14.380 11:35:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:10:14.380 11:35:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:10:14.380 11:35:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 67114 00:10:14.380 11:35:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 67114 ']' 00:10:14.380 11:35:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 67114 00:10:14.380 11:35:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:10:14.380 11:35:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:14.380 11:35:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67114 00:10:14.380 11:35:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:14.380 11:35:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:14.380 killing process with pid 67114 00:10:14.380 11:35:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67114' 00:10:14.380 11:35:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@969 -- # kill 67114 00:10:14.380 11:35:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@974 -- # wait 67114 00:10:15.752 11:35:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:10:15.752 00:10:15.752 real 0m14.856s 00:10:15.752 user 0m20.926s 00:10:15.752 sys 0m4.854s 00:10:15.752 11:35:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:15.752 11:35:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:10:15.752 ************************************ 00:10:15.752 END TEST bdev_nbd 00:10:15.752 ************************************ 00:10:15.752 11:35:14 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:10:15.752 11:35:14 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:10:15.752 11:35:14 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:10:15.752 skipping fio tests on NVMe due to multi-ns failures. 00:10:15.752 11:35:14 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:10:15.752 11:35:14 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:10:15.752 11:35:14 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:10:15.752 11:35:14 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:10:15.752 11:35:14 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:15.752 11:35:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:15.752 ************************************ 00:10:15.752 START TEST bdev_verify 00:10:15.752 ************************************ 00:10:15.752 11:35:14 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:10:16.010 [2024-07-25 11:35:14.809366] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:10:16.010 [2024-07-25 11:35:14.809540] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67566 ] 00:10:16.010 [2024-07-25 11:35:14.982258] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:16.268 [2024-07-25 11:35:15.276280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.268 [2024-07-25 11:35:15.276295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:17.201 Running I/O for 5 seconds... 
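While the five-second verify pass runs, the bdevperf command the test wrapped is worth unpacking. The standalone form is below; -q/-o/-w/-t/-m are the usual bdevperf queue-depth, I/O-size, workload, duration, and core-mask options, while the reading of -C is an inference from the paired Core Mask 0x1/0x2 job rows in the results table that follows, not something this log states:

# -q 128   : 128 outstanding I/Os per job
# -o 4096  : 4 KiB per I/O
# -w verify: write a pattern, read it back, compare
# -t 5     : run for 5 seconds
# -m 0x3   : reactors on cores 0 and 1
# -C       : (inferred) let every core in the mask drive I/O to every bdev
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3
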
00:10:22.465 00:10:22.465 Latency(us) 00:10:22.465 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:22.465 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:22.465 Verification LBA range: start 0x0 length 0xbd0bd 00:10:22.465 Nvme0n1 : 5.11 1214.45 4.74 0.00 0.00 104776.95 16562.73 96754.97 00:10:22.465 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:22.465 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:10:22.465 Nvme0n1 : 5.09 1181.31 4.61 0.00 0.00 108109.51 16443.58 92941.96 00:10:22.465 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:22.465 Verification LBA range: start 0x0 length 0x4ff80 00:10:22.465 Nvme1n1p1 : 5.11 1213.79 4.74 0.00 0.00 104573.44 16205.27 88175.71 00:10:22.465 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:22.465 Verification LBA range: start 0x4ff80 length 0x4ff80 00:10:22.465 Nvme1n1p1 : 5.09 1180.86 4.61 0.00 0.00 107943.16 16443.58 86745.83 00:10:22.465 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:22.465 Verification LBA range: start 0x0 length 0x4ff7f 00:10:22.465 Nvme1n1p2 : 5.12 1213.12 4.74 0.00 0.00 104391.02 16324.42 86745.83 00:10:22.465 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:22.465 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:10:22.465 Nvme1n1p2 : 5.10 1180.43 4.61 0.00 0.00 107726.31 16443.58 85315.96 00:10:22.465 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:22.466 Verification LBA range: start 0x0 length 0x80000 00:10:22.466 Nvme2n1 : 5.12 1212.54 4.74 0.00 0.00 104189.71 16801.05 83409.45 00:10:22.466 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:22.466 Verification LBA range: start 0x80000 length 0x80000 00:10:22.466 Nvme2n1 : 5.10 1180.04 4.61 0.00 0.00 107539.62 16681.89 84362.71 00:10:22.466 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:22.466 Verification LBA range: start 0x0 length 0x80000 00:10:22.466 Nvme2n2 : 5.13 1221.62 4.77 0.00 0.00 103572.30 9889.98 81026.33 00:10:22.466 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:22.466 Verification LBA range: start 0x80000 length 0x80000 00:10:22.466 Nvme2n2 : 5.10 1179.68 4.61 0.00 0.00 107341.02 16562.73 85792.58 00:10:22.466 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:22.466 Verification LBA range: start 0x0 length 0x80000 00:10:22.466 Nvme2n3 : 5.14 1221.04 4.77 0.00 0.00 103371.48 10187.87 82456.20 00:10:22.466 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:22.466 Verification LBA range: start 0x80000 length 0x80000 00:10:22.466 Nvme2n3 : 5.10 1179.26 4.61 0.00 0.00 107139.99 16562.73 87699.08 00:10:22.466 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:22.466 Verification LBA range: start 0x0 length 0x20000 00:10:22.466 Nvme3n1 : 5.14 1220.47 4.77 0.00 0.00 103213.18 10485.76 88652.33 00:10:22.466 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:22.466 Verification LBA range: start 0x20000 length 0x20000 00:10:22.466 Nvme3n1 : 5.10 1178.90 4.61 0.00 0.00 106940.00 15728.64 90558.84 00:10:22.466 =================================================================================================================== 00:10:22.466 Total : 16777.50 65.54 0.00 0.00 105740.47 9889.98 
96754.97 00:10:23.845 00:10:23.845 real 0m8.119s 00:10:23.845 user 0m14.513s 00:10:23.845 sys 0m0.332s 00:10:23.845 11:35:22 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:23.845 11:35:22 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:10:23.845 ************************************ 00:10:23.846 END TEST bdev_verify 00:10:23.846 ************************************ 00:10:23.846 11:35:22 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:10:23.846 11:35:22 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:10:23.846 11:35:22 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:23.846 11:35:22 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:23.846 ************************************ 00:10:23.846 START TEST bdev_verify_big_io 00:10:23.846 ************************************ 00:10:23.846 11:35:22 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:10:24.104 [2024-07-25 11:35:22.983603] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:10:24.104 [2024-07-25 11:35:22.983860] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67676 ] 00:10:24.362 [2024-07-25 11:35:23.163105] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:24.620 [2024-07-25 11:35:23.445403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.620 [2024-07-25 11:35:23.445419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:25.555 Running I/O for 5 seconds... 
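This big-I/O pass is the same bdevperf harness with -o 65536, so each verify I/O now moves 64 KiB. Its only other input is the --json bdev configuration, which the log never prints; for SPDK's JSON config format it has roughly the shape sketched below. The controller name and PCIe address are placeholders, not values taken from this run, and the GPT bdevs (Nvme1n1p1/p2) are not configured here at all since the gpt vbdev module discovers them by examining the attached namespaces:

cat > /tmp/bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme0", "trtype": "PCIe", "traddr": "0000:00:10.0" }
        }
      ]
    }
  ]
}
EOF
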
00:10:32.110 00:10:32.110 Latency(us) 00:10:32.110 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:32.110 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:32.110 Verification LBA range: start 0x0 length 0xbd0b 00:10:32.110 Nvme0n1 : 5.84 110.02 6.88 0.00 0.00 1095210.43 28240.06 1151527.10 00:10:32.110 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:32.110 Verification LBA range: start 0xbd0b length 0xbd0b 00:10:32.110 Nvme0n1 : 5.69 112.21 7.01 0.00 0.00 1093722.81 31218.97 1243039.19 00:10:32.110 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:32.110 Verification LBA range: start 0x0 length 0x4ff8 00:10:32.110 Nvme1n1p1 : 5.85 112.72 7.04 0.00 0.00 1055859.55 115819.99 1014258.97 00:10:32.110 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:32.110 Verification LBA range: start 0x4ff8 length 0x4ff8 00:10:32.110 Nvme1n1p1 : 5.69 112.51 7.03 0.00 0.00 1063888.34 98184.84 999006.95 00:10:32.110 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:32.110 Verification LBA range: start 0x0 length 0x4ff7 00:10:32.110 Nvme1n1p2 : 5.85 113.35 7.08 0.00 0.00 1019355.19 165865.66 880803.84 00:10:32.110 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:32.110 Verification LBA range: start 0x4ff7 length 0x4ff7 00:10:32.110 Nvme1n1p2 : 5.84 114.34 7.15 0.00 0.00 1013353.88 117249.86 1082893.03 00:10:32.110 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:32.110 Verification LBA range: start 0x0 length 0x8000 00:10:32.110 Nvme2n1 : 6.02 124.08 7.75 0.00 0.00 919839.11 40513.16 880803.84 00:10:32.110 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:32.110 Verification LBA range: start 0x8000 length 0x8000 00:10:32.110 Nvme2n1 : 5.90 119.38 7.46 0.00 0.00 954095.84 51237.24 991380.95 00:10:32.110 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:32.110 Verification LBA range: start 0x0 length 0x8000 00:10:32.110 Nvme2n2 : 6.02 123.36 7.71 0.00 0.00 895244.85 40274.85 903681.86 00:10:32.110 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:32.110 Verification LBA range: start 0x8000 length 0x8000 00:10:32.110 Nvme2n2 : 5.98 124.64 7.79 0.00 0.00 890026.54 39559.91 1014258.97 00:10:32.110 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:32.110 Verification LBA range: start 0x0 length 0x8000 00:10:32.110 Nvme2n3 : 6.06 118.91 7.43 0.00 0.00 900142.54 37891.72 1776859.69 00:10:32.110 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:32.110 Verification LBA range: start 0x8000 length 0x8000 00:10:32.110 Nvme2n3 : 5.98 128.43 8.03 0.00 0.00 841188.54 37653.41 1029510.98 00:10:32.110 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:32.110 Verification LBA range: start 0x0 length 0x2000 00:10:32.110 Nvme3n1 : 6.08 132.52 8.28 0.00 0.00 791711.51 8519.68 1814989.73 00:10:32.110 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:32.110 Verification LBA range: start 0x2000 length 0x2000 00:10:32.110 Nvme3n1 : 6.08 147.45 9.22 0.00 0.00 714653.06 1437.32 1052389.00 00:10:32.110 =================================================================================================================== 00:10:32.110 Total : 1693.90 105.87 0.00 0.00 935364.26 
1437.32 1814989.73 00:10:33.482 00:10:33.482 real 0m9.504s 00:10:33.482 user 0m17.390s 00:10:33.482 sys 0m0.375s 00:10:33.482 11:35:32 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:33.482 11:35:32 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:10:33.482 ************************************ 00:10:33.483 END TEST bdev_verify_big_io 00:10:33.483 ************************************ 00:10:33.483 11:35:32 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:33.483 11:35:32 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:10:33.483 11:35:32 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:33.483 11:35:32 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:33.483 ************************************ 00:10:33.483 START TEST bdev_write_zeroes 00:10:33.483 ************************************ 00:10:33.483 11:35:32 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:33.483 [2024-07-25 11:35:32.530166] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:10:33.483 [2024-07-25 11:35:32.530345] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67796 ] 00:10:33.739 [2024-07-25 11:35:32.696134] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.997 [2024-07-25 11:35:32.940253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.929 Running I/O for 1 seconds... 
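The write_zeroes pass swaps only the workload and the duration: same queue depth and 4 KiB size, -t 1, and no core mask, which is why a single reactor came up on core 0 above. For NVMe bdevs, -w write_zeroes should exercise the drive's Write Zeroes command where the controller advertises it, with the generic bdev layer falling back to writing zeroed buffers otherwise; that mapping is background knowledge about SPDK, not something this log shows. Standalone form of the run:

# Same harness, different workload; with no -m the app defaults to one core.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w write_zeroes -t 1
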
00:10:35.870 00:10:35.870 Latency(us) 00:10:35.870 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:35.870 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:35.870 Nvme0n1 : 1.02 7796.49 30.46 0.00 0.00 16350.86 13583.83 28716.68 00:10:35.870 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:35.870 Nvme1n1p1 : 1.02 7786.19 30.41 0.00 0.00 16341.37 13762.56 29074.15 00:10:35.870 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:35.870 Nvme1n1p2 : 1.02 7823.27 30.56 0.00 0.00 16210.55 8757.99 25141.99 00:10:35.870 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:35.870 Nvme2n1 : 1.02 7813.81 30.52 0.00 0.00 16192.33 9175.04 24307.90 00:10:35.870 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:35.870 Nvme2n2 : 1.03 7804.42 30.49 0.00 0.00 16182.15 9532.51 24546.21 00:10:35.870 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:35.870 Nvme2n3 : 1.03 7795.16 30.45 0.00 0.00 16146.73 9592.09 22758.87 00:10:35.870 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:35.870 Nvme3n1 : 1.03 7785.89 30.41 0.00 0.00 16125.56 9889.98 22163.08 00:10:35.870 =================================================================================================================== 00:10:35.870 Total : 54605.24 213.30 0.00 0.00 16221.08 8757.99 29074.15 00:10:37.247 00:10:37.247 real 0m3.471s 00:10:37.247 user 0m3.044s 00:10:37.247 sys 0m0.303s 00:10:37.248 11:35:35 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:37.248 11:35:35 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:10:37.248 ************************************ 00:10:37.248 END TEST bdev_write_zeroes 00:10:37.248 ************************************ 00:10:37.248 11:35:35 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:37.248 11:35:35 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:10:37.248 11:35:35 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:37.248 11:35:35 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:37.248 ************************************ 00:10:37.248 START TEST bdev_json_nonenclosed 00:10:37.248 ************************************ 00:10:37.248 11:35:35 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:37.248 [2024-07-25 11:35:36.070027] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:10:37.248 [2024-07-25 11:35:36.070236] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67856 ] 00:10:37.248 [2024-07-25 11:35:36.249147] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.505 [2024-07-25 11:35:36.488793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.505 [2024-07-25 11:35:36.488937] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:10:37.505 [2024-07-25 11:35:36.488974] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:10:37.505 [2024-07-25 11:35:36.488994] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:38.072 00:10:38.072 real 0m0.947s 00:10:38.072 user 0m0.686s 00:10:38.072 sys 0m0.154s 00:10:38.072 11:35:36 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:38.072 11:35:36 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:10:38.072 ************************************ 00:10:38.072 END TEST bdev_json_nonenclosed 00:10:38.072 ************************************ 00:10:38.072 11:35:36 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:38.072 11:35:36 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:10:38.072 11:35:36 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:38.072 11:35:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:38.072 ************************************ 00:10:38.072 START TEST bdev_json_nonarray 00:10:38.072 ************************************ 00:10:38.072 11:35:36 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:38.072 [2024-07-25 11:35:37.077381] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:10:38.072 [2024-07-25 11:35:37.077578] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67887 ] 00:10:38.330 [2024-07-25 11:35:37.252309] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:38.589 [2024-07-25 11:35:37.493832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.589 [2024-07-25 11:35:37.494003] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:10:38.589 [2024-07-25 11:35:37.494042] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:10:38.589 [2024-07-25 11:35:37.494062] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:39.156 00:10:39.156 real 0m0.963s 00:10:39.156 user 0m0.696s 00:10:39.156 sys 0m0.161s 00:10:39.156 11:35:37 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:39.156 11:35:37 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:10:39.156 ************************************ 00:10:39.156 END TEST bdev_json_nonarray 00:10:39.156 ************************************ 00:10:39.156 11:35:37 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:10:39.156 11:35:37 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:10:39.156 11:35:37 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:10:39.156 11:35:37 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:39.156 11:35:37 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:39.156 11:35:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:39.156 ************************************ 00:10:39.156 START TEST bdev_gpt_uuid 00:10:39.156 ************************************ 00:10:39.156 11:35:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1125 -- # bdev_gpt_uuid 00:10:39.156 11:35:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:10:39.156 11:35:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:10:39.156 11:35:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=67918 00:10:39.156 11:35:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:10:39.156 11:35:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:39.156 11:35:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 67918 00:10:39.156 11:35:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@831 -- # '[' -z 67918 ']' 00:10:39.156 11:35:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:39.156 11:35:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:39.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:39.156 11:35:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:39.156 11:35:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:39.156 11:35:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:39.156 [2024-07-25 11:35:38.078575] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:10:39.156 [2024-07-25 11:35:38.078790] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67918 ] 00:10:39.414 [2024-07-25 11:35:38.244296] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.672 [2024-07-25 11:35:38.503177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.606 11:35:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:40.606 11:35:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # return 0 00:10:40.606 11:35:39 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:40.606 11:35:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.606 11:35:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:40.606 Some configs were skipped because the RPC state that can call them passed over. 00:10:40.606 11:35:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.606 11:35:39 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:10:40.606 11:35:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.606 11:35:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:40.864 11:35:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.864 11:35:39 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:10:40.864 11:35:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.864 11:35:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:40.864 11:35:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.864 11:35:39 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:10:40.864 { 00:10:40.864 "name": "Nvme1n1p1", 00:10:40.864 "aliases": [ 00:10:40.864 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:10:40.864 ], 00:10:40.864 "product_name": "GPT Disk", 00:10:40.864 "block_size": 4096, 00:10:40.864 "num_blocks": 655104, 00:10:40.864 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:10:40.864 "assigned_rate_limits": { 00:10:40.864 "rw_ios_per_sec": 0, 00:10:40.864 "rw_mbytes_per_sec": 0, 00:10:40.864 "r_mbytes_per_sec": 0, 00:10:40.864 "w_mbytes_per_sec": 0 00:10:40.864 }, 00:10:40.864 "claimed": false, 00:10:40.864 "zoned": false, 00:10:40.864 "supported_io_types": { 00:10:40.864 "read": true, 00:10:40.864 "write": true, 00:10:40.864 "unmap": true, 00:10:40.864 "flush": true, 00:10:40.864 "reset": true, 00:10:40.864 "nvme_admin": false, 00:10:40.864 "nvme_io": false, 00:10:40.865 "nvme_io_md": false, 00:10:40.865 "write_zeroes": true, 00:10:40.865 "zcopy": false, 00:10:40.865 "get_zone_info": false, 00:10:40.865 "zone_management": false, 00:10:40.865 "zone_append": false, 00:10:40.865 "compare": true, 00:10:40.865 "compare_and_write": false, 00:10:40.865 "abort": true, 00:10:40.865 "seek_hole": false, 00:10:40.865 "seek_data": false, 00:10:40.865 "copy": true, 00:10:40.865 "nvme_iov_md": false 00:10:40.865 }, 00:10:40.865 "driver_specific": { 
00:10:40.865 "gpt": { 00:10:40.865 "base_bdev": "Nvme1n1", 00:10:40.865 "offset_blocks": 256, 00:10:40.865 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:10:40.865 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:10:40.865 "partition_name": "SPDK_TEST_first" 00:10:40.865 } 00:10:40.865 } 00:10:40.865 } 00:10:40.865 ]' 00:10:40.865 11:35:39 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:10:40.865 11:35:39 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:10:40.865 11:35:39 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:10:40.865 11:35:39 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:10:40.865 11:35:39 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:10:40.865 11:35:39 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:10:40.865 11:35:39 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:10:40.865 11:35:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.865 11:35:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:40.865 11:35:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.865 11:35:39 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:10:40.865 { 00:10:40.865 "name": "Nvme1n1p2", 00:10:40.865 "aliases": [ 00:10:40.865 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:10:40.865 ], 00:10:40.865 "product_name": "GPT Disk", 00:10:40.865 "block_size": 4096, 00:10:40.865 "num_blocks": 655103, 00:10:40.865 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:10:40.865 "assigned_rate_limits": { 00:10:40.865 "rw_ios_per_sec": 0, 00:10:40.865 "rw_mbytes_per_sec": 0, 00:10:40.865 "r_mbytes_per_sec": 0, 00:10:40.865 "w_mbytes_per_sec": 0 00:10:40.865 }, 00:10:40.865 "claimed": false, 00:10:40.865 "zoned": false, 00:10:40.865 "supported_io_types": { 00:10:40.865 "read": true, 00:10:40.865 "write": true, 00:10:40.865 "unmap": true, 00:10:40.865 "flush": true, 00:10:40.865 "reset": true, 00:10:40.865 "nvme_admin": false, 00:10:40.865 "nvme_io": false, 00:10:40.865 "nvme_io_md": false, 00:10:40.865 "write_zeroes": true, 00:10:40.865 "zcopy": false, 00:10:40.865 "get_zone_info": false, 00:10:40.865 "zone_management": false, 00:10:40.865 "zone_append": false, 00:10:40.865 "compare": true, 00:10:40.865 "compare_and_write": false, 00:10:40.865 "abort": true, 00:10:40.865 "seek_hole": false, 00:10:40.865 "seek_data": false, 00:10:40.865 "copy": true, 00:10:40.865 "nvme_iov_md": false 00:10:40.865 }, 00:10:40.865 "driver_specific": { 00:10:40.865 "gpt": { 00:10:40.865 "base_bdev": "Nvme1n1", 00:10:40.865 "offset_blocks": 655360, 00:10:40.865 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:10:40.865 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:10:40.865 "partition_name": "SPDK_TEST_second" 00:10:40.865 } 00:10:40.865 } 00:10:40.865 } 00:10:40.865 ]' 00:10:40.865 11:35:39 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:10:41.123 11:35:39 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:10:41.123 11:35:39 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:10:41.123 11:35:39 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:10:41.123 11:35:39 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:10:41.123 11:35:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:10:41.123 11:35:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 67918 00:10:41.123 11:35:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@950 -- # '[' -z 67918 ']' 00:10:41.123 11:35:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # kill -0 67918 00:10:41.123 11:35:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # uname 00:10:41.123 11:35:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:41.123 11:35:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67918 00:10:41.123 11:35:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:41.123 11:35:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:41.123 killing process with pid 67918 00:10:41.123 11:35:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67918' 00:10:41.123 11:35:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@969 -- # kill 67918 00:10:41.123 11:35:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@974 -- # wait 67918 00:10:43.652 00:10:43.652 real 0m4.323s 00:10:43.652 user 0m4.493s 00:10:43.652 sys 0m0.571s 00:10:43.652 11:35:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:43.652 ************************************ 00:10:43.652 END TEST bdev_gpt_uuid 00:10:43.652 ************************************ 00:10:43.652 11:35:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:43.652 11:35:42 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:10:43.652 11:35:42 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:10:43.652 11:35:42 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:10:43.652 11:35:42 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:10:43.652 11:35:42 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:43.652 11:35:42 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:10:43.652 11:35:42 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:10:43.652 11:35:42 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:10:43.652 11:35:42 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:43.652 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:43.910 Waiting for block devices as requested 00:10:43.910 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:44.169 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:10:44.169 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:44.169 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:49.433 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:49.433 11:35:48 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:10:49.433 11:35:48 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:10:49.691 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:10:49.691 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:10:49.691 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:10:49.691 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:10:49.691 11:35:48 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:10:49.691 00:10:49.691 real 1m8.081s 00:10:49.691 user 1m26.277s 00:10:49.691 sys 0m10.688s 00:10:49.691 11:35:48 blockdev_nvme_gpt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:49.691 11:35:48 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:49.692 ************************************ 00:10:49.692 END TEST blockdev_nvme_gpt 00:10:49.692 ************************************ 00:10:49.692 11:35:48 -- spdk/autotest.sh@220 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:10:49.692 11:35:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:49.692 11:35:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:49.692 11:35:48 -- common/autotest_common.sh@10 -- # set +x 00:10:49.692 ************************************ 00:10:49.692 START TEST nvme 00:10:49.692 ************************************ 00:10:49.692 11:35:48 nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:10:49.692 * Looking for test storage... 00:10:49.692 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:49.692 11:35:48 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:50.257 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:50.823 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:50.823 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:50.823 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:50.823 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:51.081 11:35:49 nvme -- nvme/nvme.sh@79 -- # uname 00:10:51.081 11:35:49 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:10:51.081 11:35:49 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:10:51.081 11:35:49 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:10:51.081 11:35:49 nvme -- common/autotest_common.sh@1082 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:10:51.081 11:35:49 nvme -- common/autotest_common.sh@1068 -- # _randomize_va_space=2 00:10:51.081 11:35:49 nvme -- common/autotest_common.sh@1069 -- # echo 0 00:10:51.081 Waiting for stub to ready for secondary processes... 00:10:51.081 11:35:49 nvme -- common/autotest_common.sh@1071 -- # stubpid=68555 00:10:51.081 11:35:49 nvme -- common/autotest_common.sh@1070 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:10:51.081 11:35:49 nvme -- common/autotest_common.sh@1072 -- # echo Waiting for stub to ready for secondary processes... 
00:10:51.081 11:35:49 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:10:51.081 11:35:49 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/68555 ]] 00:10:51.081 11:35:49 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:10:51.081 [2024-07-25 11:35:49.988604] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:10:51.081 [2024-07-25 11:35:49.988827] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:10:52.015 11:35:50 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:10:52.015 11:35:50 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/68555 ]] 00:10:52.015 11:35:50 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:10:52.579 [2024-07-25 11:35:51.410776] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:52.835 [2024-07-25 11:35:51.721698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:52.835 [2024-07-25 11:35:51.721807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:52.835 [2024-07-25 11:35:51.721831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:52.835 [2024-07-25 11:35:51.750755] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:10:52.835 [2024-07-25 11:35:51.750868] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:52.836 [2024-07-25 11:35:51.761672] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:10:52.836 [2024-07-25 11:35:51.762105] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:10:52.836 [2024-07-25 11:35:51.766047] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:52.836 [2024-07-25 11:35:51.766329] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:10:52.836 [2024-07-25 11:35:51.766438] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:10:52.836 [2024-07-25 11:35:51.769542] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:52.836 [2024-07-25 11:35:51.769819] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:10:52.836 [2024-07-25 11:35:51.769943] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:10:52.836 [2024-07-25 11:35:51.773377] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:52.836 [2024-07-25 11:35:51.773740] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:10:52.836 [2024-07-25 11:35:51.773840] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:10:52.836 [2024-07-25 11:35:51.773915] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:10:52.836 [2024-07-25 11:35:51.774012] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:10:53.093 done. 00:10:53.093 11:35:51 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:10:53.093 11:35:51 nvme -- common/autotest_common.sh@1078 -- # echo done. 
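The xtrace entries above show the harness's readiness wait: autotest_common.sh polls once per second for the sentinel file /var/run/spdk_stub0 that the stub application creates when it is ready, and checks /proc/<stubpid> so it stops waiting if the stub dies first. A minimal standalone sketch of that wait pattern follows; the 60-iteration cap and the PID argument are illustrative assumptions, not values taken from the harness.

    #!/usr/bin/env bash
    # Sketch: wait for an SPDK stub process to become ready for secondary processes.
    # Assumed usage: wait_for_stub.sh <stub_pid>   (hypothetical helper, not in the repo)
    stub_sock=/var/run/spdk_stub0   # sentinel file the stub creates once it is up
    stub_pid=$1

    for _ in $(seq 1 60); do        # 60 s cap is an assumption, not the harness value
        if [ -e "$stub_sock" ]; then
            echo done.
            exit 0
        fi
        if [ ! -e "/proc/$stub_pid" ]; then
            # Stub exited before signalling readiness; no point in waiting longer.
            echo "stub $stub_pid died before creating $stub_sock" >&2
            exit 1
        fi
        sleep 1s
    done
    echo "timed out waiting for $stub_sock" >&2
    exit 1

Only after this wait prints "done." does the harness move on to the secondary-process tests (nvme_reset, nvme_identify, and so on below).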
00:10:53.093 11:35:51 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:10:53.093 11:35:51 nvme -- common/autotest_common.sh@1101 -- # '[' 10 -le 1 ']' 00:10:53.093 11:35:51 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:53.093 11:35:51 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:53.093 ************************************ 00:10:53.093 START TEST nvme_reset 00:10:53.093 ************************************ 00:10:53.093 11:35:51 nvme.nvme_reset -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:10:53.352 Initializing NVMe Controllers 00:10:53.352 Skipping QEMU NVMe SSD at 0000:00:10.0 00:10:53.352 Skipping QEMU NVMe SSD at 0000:00:11.0 00:10:53.352 Skipping QEMU NVMe SSD at 0000:00:13.0 00:10:53.352 Skipping QEMU NVMe SSD at 0000:00:12.0 00:10:53.352 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:10:53.352 00:10:53.352 real 0m0.327s 00:10:53.352 user 0m0.126s 00:10:53.352 sys 0m0.155s 00:10:53.352 11:35:52 nvme.nvme_reset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:53.352 11:35:52 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:10:53.352 ************************************ 00:10:53.352 END TEST nvme_reset 00:10:53.352 ************************************ 00:10:53.352 11:35:52 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:10:53.352 11:35:52 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:53.352 11:35:52 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:53.352 11:35:52 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:53.352 ************************************ 00:10:53.352 START TEST nvme_identify 00:10:53.352 ************************************ 00:10:53.352 11:35:52 nvme.nvme_identify -- common/autotest_common.sh@1125 -- # nvme_identify 00:10:53.352 11:35:52 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:10:53.352 11:35:52 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:10:53.352 11:35:52 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:10:53.352 11:35:52 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:10:53.352 11:35:52 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # bdfs=() 00:10:53.352 11:35:52 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # local bdfs 00:10:53.352 11:35:52 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:53.352 11:35:52 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:53.352 11:35:52 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:10:53.352 11:35:52 nvme.nvme_identify -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:10:53.352 11:35:52 nvme.nvme_identify -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:53.352 11:35:52 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:10:53.919 [2024-07-25 11:35:52.661641] nvme_ctrlr.c:3608:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 68585 terminated unexpected 00:10:53.919 ===================================================== 00:10:53.919 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:53.919 
===================================================== 00:10:53.919 Controller Capabilities/Features 00:10:53.919 ================================ 00:10:53.919 Vendor ID: 1b36 00:10:53.919 Subsystem Vendor ID: 1af4 00:10:53.919 Serial Number: 12340 00:10:53.919 Model Number: QEMU NVMe Ctrl 00:10:53.919 Firmware Version: 8.0.0 00:10:53.919 Recommended Arb Burst: 6 00:10:53.919 IEEE OUI Identifier: 00 54 52 00:10:53.919 Multi-path I/O 00:10:53.919 May have multiple subsystem ports: No 00:10:53.919 May have multiple controllers: No 00:10:53.919 Associated with SR-IOV VF: No 00:10:53.919 Max Data Transfer Size: 524288 00:10:53.919 Max Number of Namespaces: 256 00:10:53.919 Max Number of I/O Queues: 64 00:10:53.919 NVMe Specification Version (VS): 1.4 00:10:53.919 NVMe Specification Version (Identify): 1.4 00:10:53.919 Maximum Queue Entries: 2048 00:10:53.919 Contiguous Queues Required: Yes 00:10:53.919 Arbitration Mechanisms Supported 00:10:53.919 Weighted Round Robin: Not Supported 00:10:53.919 Vendor Specific: Not Supported 00:10:53.919 Reset Timeout: 7500 ms 00:10:53.919 Doorbell Stride: 4 bytes 00:10:53.919 NVM Subsystem Reset: Not Supported 00:10:53.919 Command Sets Supported 00:10:53.919 NVM Command Set: Supported 00:10:53.919 Boot Partition: Not Supported 00:10:53.919 Memory Page Size Minimum: 4096 bytes 00:10:53.919 Memory Page Size Maximum: 65536 bytes 00:10:53.919 Persistent Memory Region: Not Supported 00:10:53.919 Optional Asynchronous Events Supported 00:10:53.919 Namespace Attribute Notices: Supported 00:10:53.919 Firmware Activation Notices: Not Supported 00:10:53.919 ANA Change Notices: Not Supported 00:10:53.919 PLE Aggregate Log Change Notices: Not Supported 00:10:53.919 LBA Status Info Alert Notices: Not Supported 00:10:53.919 EGE Aggregate Log Change Notices: Not Supported 00:10:53.919 Normal NVM Subsystem Shutdown event: Not Supported 00:10:53.919 Zone Descriptor Change Notices: Not Supported 00:10:53.919 Discovery Log Change Notices: Not Supported 00:10:53.919 Controller Attributes 00:10:53.919 128-bit Host Identifier: Not Supported 00:10:53.919 Non-Operational Permissive Mode: Not Supported 00:10:53.919 NVM Sets: Not Supported 00:10:53.919 Read Recovery Levels: Not Supported 00:10:53.919 Endurance Groups: Not Supported 00:10:53.919 Predictable Latency Mode: Not Supported 00:10:53.919 Traffic Based Keep ALive: Not Supported 00:10:53.919 Namespace Granularity: Not Supported 00:10:53.919 SQ Associations: Not Supported 00:10:53.919 UUID List: Not Supported 00:10:53.919 Multi-Domain Subsystem: Not Supported 00:10:53.919 Fixed Capacity Management: Not Supported 00:10:53.919 Variable Capacity Management: Not Supported 00:10:53.919 Delete Endurance Group: Not Supported 00:10:53.919 Delete NVM Set: Not Supported 00:10:53.919 Extended LBA Formats Supported: Supported 00:10:53.919 Flexible Data Placement Supported: Not Supported 00:10:53.919 00:10:53.919 Controller Memory Buffer Support 00:10:53.919 ================================ 00:10:53.919 Supported: No 00:10:53.919 00:10:53.919 Persistent Memory Region Support 00:10:53.919 ================================ 00:10:53.919 Supported: No 00:10:53.919 00:10:53.919 Admin Command Set Attributes 00:10:53.920 ============================ 00:10:53.920 Security Send/Receive: Not Supported 00:10:53.920 Format NVM: Supported 00:10:53.920 Firmware Activate/Download: Not Supported 00:10:53.920 Namespace Management: Supported 00:10:53.920 Device Self-Test: Not Supported 00:10:53.920 Directives: Supported 00:10:53.920 NVMe-MI: Not Supported 
00:10:53.920 Virtualization Management: Not Supported 00:10:53.920 Doorbell Buffer Config: Supported 00:10:53.920 Get LBA Status Capability: Not Supported 00:10:53.920 Command & Feature Lockdown Capability: Not Supported 00:10:53.920 Abort Command Limit: 4 00:10:53.920 Async Event Request Limit: 4 00:10:53.920 Number of Firmware Slots: N/A 00:10:53.920 Firmware Slot 1 Read-Only: N/A 00:10:53.920 Firmware Activation Without Reset: N/A 00:10:53.920 Multiple Update Detection Support: N/A 00:10:53.920 Firmware Update Granularity: No Information Provided 00:10:53.920 Per-Namespace SMART Log: Yes 00:10:53.920 Asymmetric Namespace Access Log Page: Not Supported 00:10:53.920 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:10:53.920 Command Effects Log Page: Supported 00:10:53.920 Get Log Page Extended Data: Supported 00:10:53.920 Telemetry Log Pages: Not Supported 00:10:53.920 Persistent Event Log Pages: Not Supported 00:10:53.920 Supported Log Pages Log Page: May Support 00:10:53.920 Commands Supported & Effects Log Page: Not Supported 00:10:53.920 Feature Identifiers & Effects Log Page:May Support 00:10:53.920 NVMe-MI Commands & Effects Log Page: May Support 00:10:53.920 Data Area 4 for Telemetry Log: Not Supported 00:10:53.920 Error Log Page Entries Supported: 1 00:10:53.920 Keep Alive: Not Supported 00:10:53.920 00:10:53.920 NVM Command Set Attributes 00:10:53.920 ========================== 00:10:53.920 Submission Queue Entry Size 00:10:53.920 Max: 64 00:10:53.920 Min: 64 00:10:53.920 Completion Queue Entry Size 00:10:53.920 Max: 16 00:10:53.920 Min: 16 00:10:53.920 Number of Namespaces: 256 00:10:53.920 Compare Command: Supported 00:10:53.920 Write Uncorrectable Command: Not Supported 00:10:53.920 Dataset Management Command: Supported 00:10:53.920 Write Zeroes Command: Supported 00:10:53.920 Set Features Save Field: Supported 00:10:53.920 Reservations: Not Supported 00:10:53.920 Timestamp: Supported 00:10:53.920 Copy: Supported 00:10:53.920 Volatile Write Cache: Present 00:10:53.920 Atomic Write Unit (Normal): 1 00:10:53.920 Atomic Write Unit (PFail): 1 00:10:53.920 Atomic Compare & Write Unit: 1 00:10:53.920 Fused Compare & Write: Not Supported 00:10:53.920 Scatter-Gather List 00:10:53.920 SGL Command Set: Supported 00:10:53.920 SGL Keyed: Not Supported 00:10:53.920 SGL Bit Bucket Descriptor: Not Supported 00:10:53.920 SGL Metadata Pointer: Not Supported 00:10:53.920 Oversized SGL: Not Supported 00:10:53.920 SGL Metadata Address: Not Supported 00:10:53.920 SGL Offset: Not Supported 00:10:53.920 Transport SGL Data Block: Not Supported 00:10:53.920 Replay Protected Memory Block: Not Supported 00:10:53.920 00:10:53.920 Firmware Slot Information 00:10:53.920 ========================= 00:10:53.920 Active slot: 1 00:10:53.920 Slot 1 Firmware Revision: 1.0 00:10:53.920 00:10:53.920 00:10:53.920 Commands Supported and Effects 00:10:53.920 ============================== 00:10:53.920 Admin Commands 00:10:53.920 -------------- 00:10:53.920 Delete I/O Submission Queue (00h): Supported 00:10:53.920 Create I/O Submission Queue (01h): Supported 00:10:53.920 Get Log Page (02h): Supported 00:10:53.920 Delete I/O Completion Queue (04h): Supported 00:10:53.920 Create I/O Completion Queue (05h): Supported 00:10:53.920 Identify (06h): Supported 00:10:53.920 Abort (08h): Supported 00:10:53.920 Set Features (09h): Supported 00:10:53.920 Get Features (0Ah): Supported 00:10:53.920 Asynchronous Event Request (0Ch): Supported 00:10:53.920 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:53.920 Directive 
Send (19h): Supported 00:10:53.920 Directive Receive (1Ah): Supported 00:10:53.920 Virtualization Management (1Ch): Supported 00:10:53.920 Doorbell Buffer Config (7Ch): Supported 00:10:53.920 Format NVM (80h): Supported LBA-Change 00:10:53.920 I/O Commands 00:10:53.920 ------------ 00:10:53.920 Flush (00h): Supported LBA-Change 00:10:53.920 Write (01h): Supported LBA-Change 00:10:53.920 Read (02h): Supported 00:10:53.920 Compare (05h): Supported 00:10:53.920 Write Zeroes (08h): Supported LBA-Change 00:10:53.920 Dataset Management (09h): Supported LBA-Change 00:10:53.920 Unknown (0Ch): Supported 00:10:53.920 Unknown (12h): Supported 00:10:53.920 Copy (19h): Supported LBA-Change 00:10:53.920 Unknown (1Dh): Supported LBA-Change 00:10:53.920 00:10:53.920 Error Log 00:10:53.920 ========= 00:10:53.920 00:10:53.920 Arbitration 00:10:53.920 =========== 00:10:53.920 Arbitration Burst: no limit 00:10:53.920 00:10:53.920 Power Management 00:10:53.920 ================ 00:10:53.920 Number of Power States: 1 00:10:53.920 Current Power State: Power State #0 00:10:53.920 Power State #0: 00:10:53.920 Max Power: 25.00 W 00:10:53.920 Non-Operational State: Operational 00:10:53.920 Entry Latency: 16 microseconds 00:10:53.920 Exit Latency: 4 microseconds 00:10:53.920 Relative Read Throughput: 0 00:10:53.920 Relative Read Latency: 0 00:10:53.920 Relative Write Throughput: 0 00:10:53.920 Relative Write Latency: 0 00:10:53.920 [2024-07-25 11:35:52.663068] nvme_ctrlr.c:3608:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 68585 terminated unexpected 00:10:53.920 Idle Power: Not Reported 00:10:53.920 Active Power: Not Reported 00:10:53.920 Non-Operational Permissive Mode: Not Supported 00:10:53.920 00:10:53.920 Health Information 00:10:53.920 ================== 00:10:53.920 Critical Warnings: 00:10:53.920 Available Spare Space: OK 00:10:53.920 Temperature: OK 00:10:53.920 Device Reliability: OK 00:10:53.920 Read Only: No 00:10:53.920 Volatile Memory Backup: OK 00:10:53.920 Current Temperature: 323 Kelvin (50 Celsius) 00:10:53.920 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:53.920 Available Spare: 0% 00:10:53.920 Available Spare Threshold: 0% 00:10:53.920 Life Percentage Used: 0% 00:10:53.920 Data Units Read: 683 00:10:53.920 Data Units Written: 574 00:10:53.920 Host Read Commands: 32033 00:10:53.920 Host Write Commands: 31071 00:10:53.920 Controller Busy Time: 0 minutes 00:10:53.920 Power Cycles: 0 00:10:53.920 Power On Hours: 0 hours 00:10:53.920 Unsafe Shutdowns: 0 00:10:53.920 Unrecoverable Media Errors: 0 00:10:53.920 Lifetime Error Log Entries: 0 00:10:53.920 Warning Temperature Time: 0 minutes 00:10:53.920 Critical Temperature Time: 0 minutes 00:10:53.920 00:10:53.920 Number of Queues 00:10:53.920 ================ 00:10:53.920 Number of I/O Submission Queues: 64 00:10:53.920 Number of I/O Completion Queues: 64 00:10:53.920 00:10:53.920 ZNS Specific Controller Data 00:10:53.920 ============================ 00:10:53.920 Zone Append Size Limit: 0 00:10:53.920 00:10:53.920 00:10:53.920 Active Namespaces 00:10:53.920 ================= 00:10:53.920 Namespace ID:1 00:10:53.920 Error Recovery Timeout: Unlimited 00:10:53.920 Command Set Identifier: NVM (00h) 00:10:53.920 Deallocate: Supported 00:10:53.920 Deallocated/Unwritten Error: Supported 00:10:53.920 Deallocated Read Value: All 0x00 00:10:53.920 Deallocate in Write Zeroes: Not Supported 00:10:53.920 Deallocated Guard Field: 0xFFFF 00:10:53.920 Flush: Supported 00:10:53.920 Reservation: Not Supported 00:10:53.920 Metadata Transferred as:
Separate Metadata Buffer 00:10:53.920 Namespace Sharing Capabilities: Private 00:10:53.920 Size (in LBAs): 1548666 (5GiB) 00:10:53.920 Capacity (in LBAs): 1548666 (5GiB) 00:10:53.920 Utilization (in LBAs): 1548666 (5GiB) 00:10:53.920 Thin Provisioning: Not Supported 00:10:53.920 Per-NS Atomic Units: No 00:10:53.920 Maximum Single Source Range Length: 128 00:10:53.920 Maximum Copy Length: 128 00:10:53.920 Maximum Source Range Count: 128 00:10:53.921 NGUID/EUI64 Never Reused: No 00:10:53.921 Namespace Write Protected: No 00:10:53.921 Number of LBA Formats: 8 00:10:53.921 Current LBA Format: LBA Format #07 00:10:53.921 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:53.921 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:53.921 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:53.921 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:53.921 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:53.921 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:53.921 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:53.921 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:53.921 00:10:53.921 NVM Specific Namespace Data 00:10:53.921 =========================== 00:10:53.921 Logical Block Storage Tag Mask: 0 00:10:53.921 Protection Information Capabilities: 00:10:53.921 16b Guard Protection Information Storage Tag Support: No 00:10:53.921 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:53.921 Storage Tag Check Read Support: No 00:10:53.921 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:53.921 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:53.921 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:53.921 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:53.921 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:53.921 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:53.921 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:53.921 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:53.921 ===================================================== 00:10:53.921 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:53.921 ===================================================== 00:10:53.921 Controller Capabilities/Features 00:10:53.921 ================================ 00:10:53.921 Vendor ID: 1b36 00:10:53.921 Subsystem Vendor ID: 1af4 00:10:53.921 Serial Number: 12341 00:10:53.921 Model Number: QEMU NVMe Ctrl 00:10:53.921 Firmware Version: 8.0.0 00:10:53.921 Recommended Arb Burst: 6 00:10:53.921 IEEE OUI Identifier: 00 54 52 00:10:53.921 Multi-path I/O 00:10:53.921 May have multiple subsystem ports: No 00:10:53.921 May have multiple controllers: No 00:10:53.921 Associated with SR-IOV VF: No 00:10:53.921 Max Data Transfer Size: 524288 00:10:53.921 Max Number of Namespaces: 256 00:10:53.921 Max Number of I/O Queues: 64 00:10:53.921 NVMe Specification Version (VS): 1.4 00:10:53.921 NVMe Specification Version (Identify): 1.4 00:10:53.921 Maximum Queue Entries: 2048 00:10:53.921 Contiguous Queues Required: Yes 00:10:53.921 Arbitration Mechanisms Supported 00:10:53.921 Weighted Round Robin: Not Supported 00:10:53.921 Vendor Specific: Not Supported 00:10:53.921 Reset Timeout: 7500 ms 
00:10:53.921 Doorbell Stride: 4 bytes 00:10:53.921 NVM Subsystem Reset: Not Supported 00:10:53.921 Command Sets Supported 00:10:53.921 NVM Command Set: Supported 00:10:53.921 Boot Partition: Not Supported 00:10:53.921 Memory Page Size Minimum: 4096 bytes 00:10:53.921 Memory Page Size Maximum: 65536 bytes 00:10:53.921 Persistent Memory Region: Not Supported 00:10:53.921 Optional Asynchronous Events Supported 00:10:53.921 Namespace Attribute Notices: Supported 00:10:53.921 Firmware Activation Notices: Not Supported 00:10:53.921 ANA Change Notices: Not Supported 00:10:53.921 PLE Aggregate Log Change Notices: Not Supported 00:10:53.921 LBA Status Info Alert Notices: Not Supported 00:10:53.921 EGE Aggregate Log Change Notices: Not Supported 00:10:53.921 Normal NVM Subsystem Shutdown event: Not Supported 00:10:53.921 Zone Descriptor Change Notices: Not Supported 00:10:53.921 Discovery Log Change Notices: Not Supported 00:10:53.921 Controller Attributes 00:10:53.921 128-bit Host Identifier: Not Supported 00:10:53.921 Non-Operational Permissive Mode: Not Supported 00:10:53.921 NVM Sets: Not Supported 00:10:53.921 Read Recovery Levels: Not Supported 00:10:53.921 Endurance Groups: Not Supported 00:10:53.921 Predictable Latency Mode: Not Supported 00:10:53.921 Traffic Based Keep ALive: Not Supported 00:10:53.921 Namespace Granularity: Not Supported 00:10:53.921 SQ Associations: Not Supported 00:10:53.921 UUID List: Not Supported 00:10:53.921 Multi-Domain Subsystem: Not Supported 00:10:53.921 Fixed Capacity Management: Not Supported 00:10:53.921 Variable Capacity Management: Not Supported 00:10:53.921 Delete Endurance Group: Not Supported 00:10:53.921 Delete NVM Set: Not Supported 00:10:53.921 Extended LBA Formats Supported: Supported 00:10:53.921 Flexible Data Placement Supported: Not Supported 00:10:53.921 00:10:53.921 Controller Memory Buffer Support 00:10:53.921 ================================ 00:10:53.921 Supported: No 00:10:53.921 00:10:53.921 Persistent Memory Region Support 00:10:53.921 ================================ 00:10:53.921 Supported: No 00:10:53.921 00:10:53.921 Admin Command Set Attributes 00:10:53.921 ============================ 00:10:53.921 Security Send/Receive: Not Supported 00:10:53.921 Format NVM: Supported 00:10:53.921 Firmware Activate/Download: Not Supported 00:10:53.921 Namespace Management: Supported 00:10:53.921 Device Self-Test: Not Supported 00:10:53.921 Directives: Supported 00:10:53.921 NVMe-MI: Not Supported 00:10:53.921 Virtualization Management: Not Supported 00:10:53.921 Doorbell Buffer Config: Supported 00:10:53.921 Get LBA Status Capability: Not Supported 00:10:53.921 Command & Feature Lockdown Capability: Not Supported 00:10:53.921 Abort Command Limit: 4 00:10:53.921 Async Event Request Limit: 4 00:10:53.921 Number of Firmware Slots: N/A 00:10:53.921 Firmware Slot 1 Read-Only: N/A 00:10:53.921 Firmware Activation Without Reset: N/A 00:10:53.921 Multiple Update Detection Support: N/A 00:10:53.921 Firmware Update Granularity: No Information Provided 00:10:53.921 Per-Namespace SMART Log: Yes 00:10:53.921 Asymmetric Namespace Access Log Page: Not Supported 00:10:53.921 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:10:53.921 Command Effects Log Page: Supported 00:10:53.921 Get Log Page Extended Data: Supported 00:10:53.921 Telemetry Log Pages: Not Supported 00:10:53.921 Persistent Event Log Pages: Not Supported 00:10:53.921 Supported Log Pages Log Page: May Support 00:10:53.921 Commands Supported & Effects Log Page: Not Supported 00:10:53.921 Feature Identifiers & 
Effects Log Page:May Support 00:10:53.921 NVMe-MI Commands & Effects Log Page: May Support 00:10:53.921 Data Area 4 for Telemetry Log: Not Supported 00:10:53.921 Error Log Page Entries Supported: 1 00:10:53.921 Keep Alive: Not Supported 00:10:53.921 00:10:53.921 NVM Command Set Attributes 00:10:53.921 ========================== 00:10:53.921 Submission Queue Entry Size 00:10:53.921 Max: 64 00:10:53.921 Min: 64 00:10:53.921 Completion Queue Entry Size 00:10:53.921 Max: 16 00:10:53.921 Min: 16 00:10:53.921 Number of Namespaces: 256 00:10:53.921 Compare Command: Supported 00:10:53.921 Write Uncorrectable Command: Not Supported 00:10:53.921 Dataset Management Command: Supported 00:10:53.921 Write Zeroes Command: Supported 00:10:53.921 Set Features Save Field: Supported 00:10:53.921 Reservations: Not Supported 00:10:53.921 Timestamp: Supported 00:10:53.921 Copy: Supported 00:10:53.921 Volatile Write Cache: Present 00:10:53.921 Atomic Write Unit (Normal): 1 00:10:53.921 Atomic Write Unit (PFail): 1 00:10:53.921 Atomic Compare & Write Unit: 1 00:10:53.921 Fused Compare & Write: Not Supported 00:10:53.921 Scatter-Gather List 00:10:53.921 SGL Command Set: Supported 00:10:53.921 SGL Keyed: Not Supported 00:10:53.921 SGL Bit Bucket Descriptor: Not Supported 00:10:53.921 SGL Metadata Pointer: Not Supported 00:10:53.921 Oversized SGL: Not Supported 00:10:53.921 SGL Metadata Address: Not Supported 00:10:53.921 SGL Offset: Not Supported 00:10:53.921 Transport SGL Data Block: Not Supported 00:10:53.921 Replay Protected Memory Block: Not Supported 00:10:53.921 00:10:53.921 Firmware Slot Information 00:10:53.921 ========================= 00:10:53.921 Active slot: 1 00:10:53.921 Slot 1 Firmware Revision: 1.0 00:10:53.921 00:10:53.921 00:10:53.921 Commands Supported and Effects 00:10:53.921 ============================== 00:10:53.921 Admin Commands 00:10:53.922 -------------- 00:10:53.922 Delete I/O Submission Queue (00h): Supported 00:10:53.922 Create I/O Submission Queue (01h): Supported 00:10:53.922 Get Log Page (02h): Supported 00:10:53.922 Delete I/O Completion Queue (04h): Supported 00:10:53.922 Create I/O Completion Queue (05h): Supported 00:10:53.922 Identify (06h): Supported 00:10:53.922 Abort (08h): Supported 00:10:53.922 Set Features (09h): Supported 00:10:53.922 Get Features (0Ah): Supported 00:10:53.922 Asynchronous Event Request (0Ch): Supported 00:10:53.922 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:53.922 Directive Send (19h): Supported 00:10:53.922 Directive Receive (1Ah): Supported 00:10:53.922 Virtualization Management (1Ch): Supported 00:10:53.922 Doorbell Buffer Config (7Ch): Supported 00:10:53.922 Format NVM (80h): Supported LBA-Change 00:10:53.922 I/O Commands 00:10:53.922 ------------ 00:10:53.922 Flush (00h): Supported LBA-Change 00:10:53.922 Write (01h): Supported LBA-Change 00:10:53.922 Read (02h): Supported 00:10:53.922 Compare (05h): Supported 00:10:53.922 Write Zeroes (08h): Supported LBA-Change 00:10:53.922 Dataset Management (09h): Supported LBA-Change 00:10:53.922 Unknown (0Ch): Supported 00:10:53.922 Unknown (12h): Supported 00:10:53.922 Copy (19h): Supported LBA-Change 00:10:53.922 Unknown (1Dh): Supported LBA-Change 00:10:53.922 00:10:53.922 Error Log 00:10:53.922 ========= 00:10:53.922 00:10:53.922 Arbitration 00:10:53.922 =========== 00:10:53.922 Arbitration Burst: no limit 00:10:53.922 00:10:53.922 Power Management 00:10:53.922 ================ 00:10:53.922 Number of Power States: 1 00:10:53.922 Current Power State: Power State #0 00:10:53.922 Power 
State #0: 00:10:53.922 Max Power: 25.00 W 00:10:53.922 Non-Operational State: Operational 00:10:53.922 Entry Latency: 16 microseconds 00:10:53.922 Exit Latency: 4 microseconds 00:10:53.922 Relative Read Throughput: 0 00:10:53.922 Relative Read Latency: 0 00:10:53.922 Relative Write Throughput: 0 00:10:53.922 Relative Write Latency: 0 00:10:53.922 Idle Power: Not Reported 00:10:53.922 Active Power: Not Reported 00:10:53.922 Non-Operational Permissive Mode: Not Supported 00:10:53.922 00:10:53.922 Health Information 00:10:53.922 ================== 00:10:53.922 Critical Warnings: 00:10:53.922 Available Spare Space: OK 00:10:53.922 [2024-07-25 11:35:52.664016] nvme_ctrlr.c:3608:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 68585 terminated unexpected 00:10:53.922 Temperature: OK 00:10:53.922 Device Reliability: OK 00:10:53.922 Read Only: No 00:10:53.922 Volatile Memory Backup: OK 00:10:53.922 Current Temperature: 323 Kelvin (50 Celsius) 00:10:53.922 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:53.922 Available Spare: 0% 00:10:53.922 Available Spare Threshold: 0% 00:10:53.922 Life Percentage Used: 0% 00:10:53.922 Data Units Read: 1069 00:10:53.922 Data Units Written: 864 00:10:53.922 Host Read Commands: 47613 00:10:53.922 Host Write Commands: 44857 00:10:53.922 Controller Busy Time: 0 minutes 00:10:53.922 Power Cycles: 0 00:10:53.922 Power On Hours: 0 hours 00:10:53.922 Unsafe Shutdowns: 0 00:10:53.922 Unrecoverable Media Errors: 0 00:10:53.922 Lifetime Error Log Entries: 0 00:10:53.922 Warning Temperature Time: 0 minutes 00:10:53.922 Critical Temperature Time: 0 minutes 00:10:53.922 00:10:53.922 Number of Queues 00:10:53.922 ================ 00:10:53.922 Number of I/O Submission Queues: 64 00:10:53.922 Number of I/O Completion Queues: 64 00:10:53.922 00:10:53.922 ZNS Specific Controller Data 00:10:53.922 ============================ 00:10:53.922 Zone Append Size Limit: 0 00:10:53.922 00:10:53.922 00:10:53.922 Active Namespaces 00:10:53.922 ================= 00:10:53.922 Namespace ID:1 00:10:53.922 Error Recovery Timeout: Unlimited 00:10:53.922 Command Set Identifier: NVM (00h) 00:10:53.922 Deallocate: Supported 00:10:53.922 Deallocated/Unwritten Error: Supported 00:10:53.922 Deallocated Read Value: All 0x00 00:10:53.922 Deallocate in Write Zeroes: Not Supported 00:10:53.922 Deallocated Guard Field: 0xFFFF 00:10:53.922 Flush: Supported 00:10:53.922 Reservation: Not Supported 00:10:53.922 Namespace Sharing Capabilities: Private 00:10:53.922 Size (in LBAs): 1310720 (5GiB) 00:10:53.922 Capacity (in LBAs): 1310720 (5GiB) 00:10:53.922 Utilization (in LBAs): 1310720 (5GiB) 00:10:53.922 Thin Provisioning: Not Supported 00:10:53.922 Per-NS Atomic Units: No 00:10:53.922 Maximum Single Source Range Length: 128 00:10:53.922 Maximum Copy Length: 128 00:10:53.922 Maximum Source Range Count: 128 00:10:53.922 NGUID/EUI64 Never Reused: No 00:10:53.922 Namespace Write Protected: No 00:10:53.922 Number of LBA Formats: 8 00:10:53.922 Current LBA Format: LBA Format #04 00:10:53.922 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:53.922 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:53.922 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:53.922 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:53.922 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:53.922 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:53.922 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:53.922 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:53.922 00:10:53.922 NVM
Specific Namespace Data 00:10:53.922 =========================== 00:10:53.922 Logical Block Storage Tag Mask: 0 00:10:53.922 Protection Information Capabilities: 00:10:53.922 16b Guard Protection Information Storage Tag Support: No 00:10:53.922 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:53.922 Storage Tag Check Read Support: No 00:10:53.922 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:53.922 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:53.922 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:53.922 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:53.922 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:53.922 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:53.922 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:53.922 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:53.922 ===================================================== 00:10:53.922 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:53.922 ===================================================== 00:10:53.922 Controller Capabilities/Features 00:10:53.922 ================================ 00:10:53.922 Vendor ID: 1b36 00:10:53.922 Subsystem Vendor ID: 1af4 00:10:53.922 Serial Number: 12343 00:10:53.922 Model Number: QEMU NVMe Ctrl 00:10:53.922 Firmware Version: 8.0.0 00:10:53.922 Recommended Arb Burst: 6 00:10:53.922 IEEE OUI Identifier: 00 54 52 00:10:53.922 Multi-path I/O 00:10:53.922 May have multiple subsystem ports: No 00:10:53.922 May have multiple controllers: Yes 00:10:53.922 Associated with SR-IOV VF: No 00:10:53.922 Max Data Transfer Size: 524288 00:10:53.922 Max Number of Namespaces: 256 00:10:53.922 Max Number of I/O Queues: 64 00:10:53.922 NVMe Specification Version (VS): 1.4 00:10:53.922 NVMe Specification Version (Identify): 1.4 00:10:53.922 Maximum Queue Entries: 2048 00:10:53.922 Contiguous Queues Required: Yes 00:10:53.922 Arbitration Mechanisms Supported 00:10:53.922 Weighted Round Robin: Not Supported 00:10:53.922 Vendor Specific: Not Supported 00:10:53.922 Reset Timeout: 7500 ms 00:10:53.922 Doorbell Stride: 4 bytes 00:10:53.922 NVM Subsystem Reset: Not Supported 00:10:53.922 Command Sets Supported 00:10:53.922 NVM Command Set: Supported 00:10:53.922 Boot Partition: Not Supported 00:10:53.922 Memory Page Size Minimum: 4096 bytes 00:10:53.922 Memory Page Size Maximum: 65536 bytes 00:10:53.922 Persistent Memory Region: Not Supported 00:10:53.922 Optional Asynchronous Events Supported 00:10:53.922 Namespace Attribute Notices: Supported 00:10:53.922 Firmware Activation Notices: Not Supported 00:10:53.922 ANA Change Notices: Not Supported 00:10:53.922 PLE Aggregate Log Change Notices: Not Supported 00:10:53.922 LBA Status Info Alert Notices: Not Supported 00:10:53.922 EGE Aggregate Log Change Notices: Not Supported 00:10:53.922 Normal NVM Subsystem Shutdown event: Not Supported 00:10:53.922 Zone Descriptor Change Notices: Not Supported 00:10:53.923 Discovery Log Change Notices: Not Supported 00:10:53.923 Controller Attributes 00:10:53.923 128-bit Host Identifier: Not Supported 00:10:53.923 Non-Operational Permissive Mode: Not Supported 00:10:53.923 NVM Sets: Not Supported 00:10:53.923 Read Recovery 
Levels: Not Supported 00:10:53.923 Endurance Groups: Supported 00:10:53.923 Predictable Latency Mode: Not Supported 00:10:53.923 Traffic Based Keep ALive: Not Supported 00:10:53.923 Namespace Granularity: Not Supported 00:10:53.923 SQ Associations: Not Supported 00:10:53.923 UUID List: Not Supported 00:10:53.923 Multi-Domain Subsystem: Not Supported 00:10:53.923 Fixed Capacity Management: Not Supported 00:10:53.923 Variable Capacity Management: Not Supported 00:10:53.923 Delete Endurance Group: Not Supported 00:10:53.923 Delete NVM Set: Not Supported 00:10:53.923 Extended LBA Formats Supported: Supported 00:10:53.923 Flexible Data Placement Supported: Supported 00:10:53.923 00:10:53.923 Controller Memory Buffer Support 00:10:53.923 ================================ 00:10:53.923 Supported: No 00:10:53.923 00:10:53.923 Persistent Memory Region Support 00:10:53.923 ================================ 00:10:53.923 Supported: No 00:10:53.923 00:10:53.923 Admin Command Set Attributes 00:10:53.923 ============================ 00:10:53.923 Security Send/Receive: Not Supported 00:10:53.923 Format NVM: Supported 00:10:53.923 Firmware Activate/Download: Not Supported 00:10:53.923 Namespace Management: Supported 00:10:53.923 Device Self-Test: Not Supported 00:10:53.923 Directives: Supported 00:10:53.923 NVMe-MI: Not Supported 00:10:53.923 Virtualization Management: Not Supported 00:10:53.923 Doorbell Buffer Config: Supported 00:10:53.923 Get LBA Status Capability: Not Supported 00:10:53.923 Command & Feature Lockdown Capability: Not Supported 00:10:53.923 Abort Command Limit: 4 00:10:53.923 Async Event Request Limit: 4 00:10:53.923 Number of Firmware Slots: N/A 00:10:53.923 Firmware Slot 1 Read-Only: N/A 00:10:53.923 Firmware Activation Without Reset: N/A 00:10:53.923 Multiple Update Detection Support: N/A 00:10:53.923 Firmware Update Granularity: No Information Provided 00:10:53.923 Per-Namespace SMART Log: Yes 00:10:53.923 Asymmetric Namespace Access Log Page: Not Supported 00:10:53.923 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:10:53.923 Command Effects Log Page: Supported 00:10:53.923 Get Log Page Extended Data: Supported 00:10:53.923 Telemetry Log Pages: Not Supported 00:10:53.923 Persistent Event Log Pages: Not Supported 00:10:53.923 Supported Log Pages Log Page: May Support 00:10:53.923 Commands Supported & Effects Log Page: Not Supported 00:10:53.923 Feature Identifiers & Effects Log Page:May Support 00:10:53.923 NVMe-MI Commands & Effects Log Page: May Support 00:10:53.923 Data Area 4 for Telemetry Log: Not Supported 00:10:53.923 Error Log Page Entries Supported: 1 00:10:53.923 Keep Alive: Not Supported 00:10:53.923 00:10:53.923 NVM Command Set Attributes 00:10:53.923 ========================== 00:10:53.923 Submission Queue Entry Size 00:10:53.923 Max: 64 00:10:53.923 Min: 64 00:10:53.923 Completion Queue Entry Size 00:10:53.923 Max: 16 00:10:53.923 Min: 16 00:10:53.923 Number of Namespaces: 256 00:10:53.923 Compare Command: Supported 00:10:53.923 Write Uncorrectable Command: Not Supported 00:10:53.923 Dataset Management Command: Supported 00:10:53.923 Write Zeroes Command: Supported 00:10:53.923 Set Features Save Field: Supported 00:10:53.923 Reservations: Not Supported 00:10:53.923 Timestamp: Supported 00:10:53.923 Copy: Supported 00:10:53.923 Volatile Write Cache: Present 00:10:53.923 Atomic Write Unit (Normal): 1 00:10:53.923 Atomic Write Unit (PFail): 1 00:10:53.923 Atomic Compare & Write Unit: 1 00:10:53.923 Fused Compare & Write: Not Supported 00:10:53.923 Scatter-Gather List 
00:10:53.923 SGL Command Set: Supported 00:10:53.923 SGL Keyed: Not Supported 00:10:53.923 SGL Bit Bucket Descriptor: Not Supported 00:10:53.923 SGL Metadata Pointer: Not Supported 00:10:53.923 Oversized SGL: Not Supported 00:10:53.923 SGL Metadata Address: Not Supported 00:10:53.923 SGL Offset: Not Supported 00:10:53.923 Transport SGL Data Block: Not Supported 00:10:53.923 Replay Protected Memory Block: Not Supported 00:10:53.923 00:10:53.923 Firmware Slot Information 00:10:53.923 ========================= 00:10:53.923 Active slot: 1 00:10:53.923 Slot 1 Firmware Revision: 1.0 00:10:53.923 00:10:53.923 00:10:53.923 Commands Supported and Effects 00:10:53.923 ============================== 00:10:53.923 Admin Commands 00:10:53.923 -------------- 00:10:53.923 Delete I/O Submission Queue (00h): Supported 00:10:53.923 Create I/O Submission Queue (01h): Supported 00:10:53.923 Get Log Page (02h): Supported 00:10:53.923 Delete I/O Completion Queue (04h): Supported 00:10:53.923 Create I/O Completion Queue (05h): Supported 00:10:53.923 Identify (06h): Supported 00:10:53.923 Abort (08h): Supported 00:10:53.923 Set Features (09h): Supported 00:10:53.923 Get Features (0Ah): Supported 00:10:53.923 Asynchronous Event Request (0Ch): Supported 00:10:53.923 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:53.923 Directive Send (19h): Supported 00:10:53.923 Directive Receive (1Ah): Supported 00:10:53.923 Virtualization Management (1Ch): Supported 00:10:53.923 Doorbell Buffer Config (7Ch): Supported 00:10:53.923 Format NVM (80h): Supported LBA-Change 00:10:53.923 I/O Commands 00:10:53.923 ------------ 00:10:53.923 Flush (00h): Supported LBA-Change 00:10:53.923 Write (01h): Supported LBA-Change 00:10:53.923 Read (02h): Supported 00:10:53.923 Compare (05h): Supported 00:10:53.923 Write Zeroes (08h): Supported LBA-Change 00:10:53.923 Dataset Management (09h): Supported LBA-Change 00:10:53.923 Unknown (0Ch): Supported 00:10:53.923 Unknown (12h): Supported 00:10:53.923 Copy (19h): Supported LBA-Change 00:10:53.923 Unknown (1Dh): Supported LBA-Change 00:10:53.923 00:10:53.923 Error Log 00:10:53.923 ========= 00:10:53.923 00:10:53.923 Arbitration 00:10:53.923 =========== 00:10:53.923 Arbitration Burst: no limit 00:10:53.923 00:10:53.923 Power Management 00:10:53.923 ================ 00:10:53.923 Number of Power States: 1 00:10:53.923 Current Power State: Power State #0 00:10:53.923 Power State #0: 00:10:53.923 Max Power: 25.00 W 00:10:53.923 Non-Operational State: Operational 00:10:53.923 Entry Latency: 16 microseconds 00:10:53.923 Exit Latency: 4 microseconds 00:10:53.923 Relative Read Throughput: 0 00:10:53.923 Relative Read Latency: 0 00:10:53.923 Relative Write Throughput: 0 00:10:53.923 Relative Write Latency: 0 00:10:53.923 Idle Power: Not Reported 00:10:53.923 Active Power: Not Reported 00:10:53.923 Non-Operational Permissive Mode: Not Supported 00:10:53.923 00:10:53.923 Health Information 00:10:53.923 ================== 00:10:53.923 Critical Warnings: 00:10:53.923 Available Spare Space: OK 00:10:53.923 Temperature: OK 00:10:53.923 Device Reliability: OK 00:10:53.923 Read Only: No 00:10:53.923 Volatile Memory Backup: OK 00:10:53.923 Current Temperature: 323 Kelvin (50 Celsius) 00:10:53.923 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:53.923 Available Spare: 0% 00:10:53.923 Available Spare Threshold: 0% 00:10:53.923 Life Percentage Used: 0% 00:10:53.923 Data Units Read: 777 00:10:53.923 Data Units Written: 670 00:10:53.923 Host Read Commands: 33040 00:10:53.923 Host Write Commands: 31630 
00:10:53.923 Controller Busy Time: 0 minutes 00:10:53.923 Power Cycles: 0 00:10:53.923 Power On Hours: 0 hours 00:10:53.923 Unsafe Shutdowns: 0 00:10:53.923 Unrecoverable Media Errors: 0 00:10:53.923 Lifetime Error Log Entries: 0 00:10:53.923 Warning Temperature Time: 0 minutes 00:10:53.923 Critical Temperature Time: 0 minutes 00:10:53.923 00:10:53.923 Number of Queues 00:10:53.923 ================ 00:10:53.923 Number of I/O Submission Queues: 64 00:10:53.923 Number of I/O Completion Queues: 64 00:10:53.923 00:10:53.923 ZNS Specific Controller Data 00:10:53.923 ============================ 00:10:53.923 Zone Append Size Limit: 0 00:10:53.923 00:10:53.923 00:10:53.923 Active Namespaces 00:10:53.923 ================= 00:10:53.924 Namespace ID:1 00:10:53.924 Error Recovery Timeout: Unlimited 00:10:53.924 Command Set Identifier: NVM (00h) 00:10:53.924 Deallocate: Supported 00:10:53.924 Deallocated/Unwritten Error: Supported 00:10:53.924 Deallocated Read Value: All 0x00 00:10:53.924 Deallocate in Write Zeroes: Not Supported 00:10:53.924 Deallocated Guard Field: 0xFFFF 00:10:53.924 Flush: Supported 00:10:53.924 Reservation: Not Supported 00:10:53.924 Namespace Sharing Capabilities: Multiple Controllers 00:10:53.924 Size (in LBAs): 262144 (1GiB) 00:10:53.924 Capacity (in LBAs): 262144 (1GiB) 00:10:53.924 Utilization (in LBAs): 262144 (1GiB) 00:10:53.924 Thin Provisioning: Not Supported 00:10:53.924 Per-NS Atomic Units: No 00:10:53.924 Maximum Single Source Range Length: 128 00:10:53.924 Maximum Copy Length: 128 00:10:53.924 Maximum Source Range Count: 128 00:10:53.924 NGUID/EUI64 Never Reused: No 00:10:53.924 Namespace Write Protected: No 00:10:53.924 Endurance group ID: 1 00:10:53.924 Number of LBA Formats: 8 00:10:53.924 Current LBA Format: LBA Format #04 00:10:53.924 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:53.924 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:53.924 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:53.924 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:53.924 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:53.924 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:53.924 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:53.924 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:53.924 00:10:53.924 Get Feature FDP: 00:10:53.924 ================ 00:10:53.924 Enabled: Yes 00:10:53.924 FDP configuration index: 0 00:10:53.924 00:10:53.924 FDP configurations log page 00:10:53.924 =========================== 00:10:53.924 Number of FDP configurations: 1 00:10:53.924 Version: 0 00:10:53.924 Size: 112 00:10:53.924 FDP Configuration Descriptor: 0 00:10:53.924 Descriptor Size: 96 00:10:53.924 Reclaim Group Identifier format: 2 00:10:53.924 FDP Volatile Write Cache: Not Present 00:10:53.924 FDP Configuration: Valid 00:10:53.924 Vendor Specific Size: 0 00:10:53.924 Number of Reclaim Groups: 2 00:10:53.924 Number of Recalim Unit Handles: 8 00:10:53.924 Max Placement Identifiers: 128 00:10:53.924 Number of Namespaces Suppprted: 256 00:10:53.924 Reclaim unit Nominal Size: 6000000 bytes 00:10:53.924 Estimated Reclaim Unit Time Limit: Not Reported 00:10:53.924 RUH Desc #000: RUH Type: Initially Isolated 00:10:53.924 RUH Desc #001: RUH Type: Initially Isolated 00:10:53.924 RUH Desc #002: RUH Type: Initially Isolated 00:10:53.924 RUH Desc #003: RUH Type: Initially Isolated 00:10:53.924 RUH Desc #004: RUH Type: Initially Isolated 00:10:53.924 RUH Desc #005: RUH Type: Initially Isolated 00:10:53.924 RUH Desc #006: RUH Type: Initially Isolated 
00:10:53.924 RUH Desc #007: RUH Type: Initially Isolated 00:10:53.924 00:10:53.924 FDP reclaim unit handle usage log page 00:10:53.924 ====================================== 00:10:53.924 Number of Reclaim Unit Handles: 8 00:10:53.924 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:53.924 RUH Usage Desc #001: RUH Attributes: Unused 00:10:53.924 RUH Usage Desc #002: RUH Attributes: Unused 00:10:53.924 RUH Usage Desc #003: RUH Attributes: Unused 00:10:53.924 RUH Usage Desc #004: RUH Attributes: Unused 00:10:53.924 RUH Usage Desc #005: RUH Attributes: Unused 00:10:53.924 RUH Usage Desc #006: RUH Attributes: Unused 00:10:53.924 RUH Usage Desc #007: RUH Attributes: Unused 00:10:53.924 00:10:53.924 FDP statistics log page 00:10:53.924 ======================= 00:10:53.924 Host bytes with metadata written: 421502976 00:10:53.924 [2024-07-25 11:35:52.665940] nvme_ctrlr.c:3608:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 68585 terminated unexpected 00:10:53.924 Media bytes with metadata written: 421548032 00:10:53.924 Media bytes erased: 0 00:10:53.924 00:10:53.924 FDP events log page 00:10:53.924 =================== 00:10:53.924 Number of FDP events: 0 00:10:53.924 00:10:53.924 NVM Specific Namespace Data 00:10:53.924 =========================== 00:10:53.924 Logical Block Storage Tag Mask: 0 00:10:53.924 Protection Information Capabilities: 00:10:53.924 16b Guard Protection Information Storage Tag Support: No 00:10:53.924 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:53.924 Storage Tag Check Read Support: No 00:10:53.924 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:53.924 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:53.924 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:53.924 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:53.924 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:53.924 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:53.924 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:53.924 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:53.924 ===================================================== 00:10:53.924 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:53.924 ===================================================== 00:10:53.924 Controller Capabilities/Features 00:10:53.924 ================================ 00:10:53.924 Vendor ID: 1b36 00:10:53.924 Subsystem Vendor ID: 1af4 00:10:53.924 Serial Number: 12342 00:10:53.924 Model Number: QEMU NVMe Ctrl 00:10:53.924 Firmware Version: 8.0.0 00:10:53.924 Recommended Arb Burst: 6 00:10:53.924 IEEE OUI Identifier: 00 54 52 00:10:53.924 Multi-path I/O 00:10:53.924 May have multiple subsystem ports: No 00:10:53.924 May have multiple controllers: No 00:10:53.924 Associated with SR-IOV VF: No 00:10:53.924 Max Data Transfer Size: 524288 00:10:53.924 Max Number of Namespaces: 256 00:10:53.924 Max Number of I/O Queues: 64 00:10:53.924 NVMe Specification Version (VS): 1.4 00:10:53.924 NVMe Specification Version (Identify): 1.4 00:10:53.924 Maximum Queue Entries: 2048 00:10:53.924 Contiguous Queues Required: Yes 00:10:53.924 Arbitration Mechanisms Supported 00:10:53.924 Weighted Round Robin: Not
Supported 00:10:53.924 Vendor Specific: Not Supported 00:10:53.924 Reset Timeout: 7500 ms 00:10:53.924 Doorbell Stride: 4 bytes 00:10:53.924 NVM Subsystem Reset: Not Supported 00:10:53.924 Command Sets Supported 00:10:53.924 NVM Command Set: Supported 00:10:53.924 Boot Partition: Not Supported 00:10:53.924 Memory Page Size Minimum: 4096 bytes 00:10:53.924 Memory Page Size Maximum: 65536 bytes 00:10:53.924 Persistent Memory Region: Not Supported 00:10:53.924 Optional Asynchronous Events Supported 00:10:53.924 Namespace Attribute Notices: Supported 00:10:53.924 Firmware Activation Notices: Not Supported 00:10:53.925 ANA Change Notices: Not Supported 00:10:53.925 PLE Aggregate Log Change Notices: Not Supported 00:10:53.925 LBA Status Info Alert Notices: Not Supported 00:10:53.925 EGE Aggregate Log Change Notices: Not Supported 00:10:53.925 Normal NVM Subsystem Shutdown event: Not Supported 00:10:53.925 Zone Descriptor Change Notices: Not Supported 00:10:53.925 Discovery Log Change Notices: Not Supported 00:10:53.925 Controller Attributes 00:10:53.925 128-bit Host Identifier: Not Supported 00:10:53.925 Non-Operational Permissive Mode: Not Supported 00:10:53.925 NVM Sets: Not Supported 00:10:53.925 Read Recovery Levels: Not Supported 00:10:53.925 Endurance Groups: Not Supported 00:10:53.925 Predictable Latency Mode: Not Supported 00:10:53.925 Traffic Based Keep ALive: Not Supported 00:10:53.925 Namespace Granularity: Not Supported 00:10:53.925 SQ Associations: Not Supported 00:10:53.925 UUID List: Not Supported 00:10:53.925 Multi-Domain Subsystem: Not Supported 00:10:53.925 Fixed Capacity Management: Not Supported 00:10:53.925 Variable Capacity Management: Not Supported 00:10:53.925 Delete Endurance Group: Not Supported 00:10:53.925 Delete NVM Set: Not Supported 00:10:53.925 Extended LBA Formats Supported: Supported 00:10:53.925 Flexible Data Placement Supported: Not Supported 00:10:53.925 00:10:53.925 Controller Memory Buffer Support 00:10:53.925 ================================ 00:10:53.925 Supported: No 00:10:53.925 00:10:53.925 Persistent Memory Region Support 00:10:53.925 ================================ 00:10:53.925 Supported: No 00:10:53.925 00:10:53.925 Admin Command Set Attributes 00:10:53.925 ============================ 00:10:53.925 Security Send/Receive: Not Supported 00:10:53.925 Format NVM: Supported 00:10:53.925 Firmware Activate/Download: Not Supported 00:10:53.925 Namespace Management: Supported 00:10:53.925 Device Self-Test: Not Supported 00:10:53.925 Directives: Supported 00:10:53.925 NVMe-MI: Not Supported 00:10:53.925 Virtualization Management: Not Supported 00:10:53.925 Doorbell Buffer Config: Supported 00:10:53.925 Get LBA Status Capability: Not Supported 00:10:53.925 Command & Feature Lockdown Capability: Not Supported 00:10:53.925 Abort Command Limit: 4 00:10:53.925 Async Event Request Limit: 4 00:10:53.925 Number of Firmware Slots: N/A 00:10:53.925 Firmware Slot 1 Read-Only: N/A 00:10:53.925 Firmware Activation Without Reset: N/A 00:10:53.925 Multiple Update Detection Support: N/A 00:10:53.925 Firmware Update Granularity: No Information Provided 00:10:53.925 Per-Namespace SMART Log: Yes 00:10:53.925 Asymmetric Namespace Access Log Page: Not Supported 00:10:53.925 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:10:53.925 Command Effects Log Page: Supported 00:10:53.925 Get Log Page Extended Data: Supported 00:10:53.925 Telemetry Log Pages: Not Supported 00:10:53.925 Persistent Event Log Pages: Not Supported 00:10:53.925 Supported Log Pages Log Page: May Support 
00:10:53.925 Commands Supported & Effects Log Page: Not Supported 00:10:53.925 Feature Identifiers & Effects Log Page:May Support 00:10:53.925 NVMe-MI Commands & Effects Log Page: May Support 00:10:53.925 Data Area 4 for Telemetry Log: Not Supported 00:10:53.925 Error Log Page Entries Supported: 1 00:10:53.925 Keep Alive: Not Supported 00:10:53.925 00:10:53.925 NVM Command Set Attributes 00:10:53.925 ========================== 00:10:53.925 Submission Queue Entry Size 00:10:53.925 Max: 64 00:10:53.925 Min: 64 00:10:53.925 Completion Queue Entry Size 00:10:53.925 Max: 16 00:10:53.925 Min: 16 00:10:53.925 Number of Namespaces: 256 00:10:53.925 Compare Command: Supported 00:10:53.925 Write Uncorrectable Command: Not Supported 00:10:53.925 Dataset Management Command: Supported 00:10:53.925 Write Zeroes Command: Supported 00:10:53.925 Set Features Save Field: Supported 00:10:53.925 Reservations: Not Supported 00:10:53.925 Timestamp: Supported 00:10:53.925 Copy: Supported 00:10:53.925 Volatile Write Cache: Present 00:10:53.925 Atomic Write Unit (Normal): 1 00:10:53.925 Atomic Write Unit (PFail): 1 00:10:53.925 Atomic Compare & Write Unit: 1 00:10:53.925 Fused Compare & Write: Not Supported 00:10:53.925 Scatter-Gather List 00:10:53.925 SGL Command Set: Supported 00:10:53.925 SGL Keyed: Not Supported 00:10:53.925 SGL Bit Bucket Descriptor: Not Supported 00:10:53.925 SGL Metadata Pointer: Not Supported 00:10:53.925 Oversized SGL: Not Supported 00:10:53.925 SGL Metadata Address: Not Supported 00:10:53.925 SGL Offset: Not Supported 00:10:53.925 Transport SGL Data Block: Not Supported 00:10:53.925 Replay Protected Memory Block: Not Supported 00:10:53.925 00:10:53.925 Firmware Slot Information 00:10:53.925 ========================= 00:10:53.925 Active slot: 1 00:10:53.925 Slot 1 Firmware Revision: 1.0 00:10:53.925 00:10:53.925 00:10:53.925 Commands Supported and Effects 00:10:53.925 ============================== 00:10:53.925 Admin Commands 00:10:53.925 -------------- 00:10:53.925 Delete I/O Submission Queue (00h): Supported 00:10:53.925 Create I/O Submission Queue (01h): Supported 00:10:53.925 Get Log Page (02h): Supported 00:10:53.925 Delete I/O Completion Queue (04h): Supported 00:10:53.925 Create I/O Completion Queue (05h): Supported 00:10:53.925 Identify (06h): Supported 00:10:53.925 Abort (08h): Supported 00:10:53.925 Set Features (09h): Supported 00:10:53.925 Get Features (0Ah): Supported 00:10:53.925 Asynchronous Event Request (0Ch): Supported 00:10:53.925 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:53.925 Directive Send (19h): Supported 00:10:53.925 Directive Receive (1Ah): Supported 00:10:53.925 Virtualization Management (1Ch): Supported 00:10:53.925 Doorbell Buffer Config (7Ch): Supported 00:10:53.925 Format NVM (80h): Supported LBA-Change 00:10:53.925 I/O Commands 00:10:53.925 ------------ 00:10:53.925 Flush (00h): Supported LBA-Change 00:10:53.925 Write (01h): Supported LBA-Change 00:10:53.925 Read (02h): Supported 00:10:53.925 Compare (05h): Supported 00:10:53.925 Write Zeroes (08h): Supported LBA-Change 00:10:53.925 Dataset Management (09h): Supported LBA-Change 00:10:53.925 Unknown (0Ch): Supported 00:10:53.925 Unknown (12h): Supported 00:10:53.925 Copy (19h): Supported LBA-Change 00:10:53.925 Unknown (1Dh): Supported LBA-Change 00:10:53.925 00:10:53.925 Error Log 00:10:53.925 ========= 00:10:53.925 00:10:53.925 Arbitration 00:10:53.925 =========== 00:10:53.925 Arbitration Burst: no limit 00:10:53.925 00:10:53.925 Power Management 00:10:53.925 ================ 
00:10:53.925 Number of Power States: 1 00:10:53.925 Current Power State: Power State #0 00:10:53.925 Power State #0: 00:10:53.925 Max Power: 25.00 W 00:10:53.925 Non-Operational State: Operational 00:10:53.925 Entry Latency: 16 microseconds 00:10:53.925 Exit Latency: 4 microseconds 00:10:53.925 Relative Read Throughput: 0 00:10:53.925 Relative Read Latency: 0 00:10:53.925 Relative Write Throughput: 0 00:10:53.925 Relative Write Latency: 0 00:10:53.925 Idle Power: Not Reported 00:10:53.925 Active Power: Not Reported 00:10:53.925 Non-Operational Permissive Mode: Not Supported 00:10:53.925 00:10:53.925 Health Information 00:10:53.925 ================== 00:10:53.925 Critical Warnings: 00:10:53.925 Available Spare Space: OK 00:10:53.925 Temperature: OK 00:10:53.925 Device Reliability: OK 00:10:53.925 Read Only: No 00:10:53.925 Volatile Memory Backup: OK 00:10:53.925 Current Temperature: 323 Kelvin (50 Celsius) 00:10:53.925 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:53.925 Available Spare: 0% 00:10:53.925 Available Spare Threshold: 0% 00:10:53.925 Life Percentage Used: 0% 00:10:53.925 Data Units Read: 2174 00:10:53.925 Data Units Written: 1854 00:10:53.925 Host Read Commands: 97848 00:10:53.925 Host Write Commands: 93618 00:10:53.925 Controller Busy Time: 0 minutes 00:10:53.925 Power Cycles: 0 00:10:53.925 Power On Hours: 0 hours 00:10:53.925 Unsafe Shutdowns: 0 00:10:53.925 Unrecoverable Media Errors: 0 00:10:53.925 Lifetime Error Log Entries: 0 00:10:53.925 Warning Temperature Time: 0 minutes 00:10:53.925 Critical Temperature Time: 0 minutes 00:10:53.925 00:10:53.925 Number of Queues 00:10:53.925 ================ 00:10:53.925 Number of I/O Submission Queues: 64 00:10:53.925 Number of I/O Completion Queues: 64 00:10:53.926 00:10:53.926 ZNS Specific Controller Data 00:10:53.926 ============================ 00:10:53.926 Zone Append Size Limit: 0 00:10:53.926 00:10:53.926 00:10:53.926 Active Namespaces 00:10:53.926 ================= 00:10:53.926 Namespace ID:1 00:10:53.926 Error Recovery Timeout: Unlimited 00:10:53.926 Command Set Identifier: NVM (00h) 00:10:53.926 Deallocate: Supported 00:10:53.926 Deallocated/Unwritten Error: Supported 00:10:53.926 Deallocated Read Value: All 0x00 00:10:53.926 Deallocate in Write Zeroes: Not Supported 00:10:53.926 Deallocated Guard Field: 0xFFFF 00:10:53.926 Flush: Supported 00:10:53.926 Reservation: Not Supported 00:10:53.926 Namespace Sharing Capabilities: Private 00:10:53.926 Size (in LBAs): 1048576 (4GiB) 00:10:53.926 Capacity (in LBAs): 1048576 (4GiB) 00:10:53.926 Utilization (in LBAs): 1048576 (4GiB) 00:10:53.926 Thin Provisioning: Not Supported 00:10:53.926 Per-NS Atomic Units: No 00:10:53.926 Maximum Single Source Range Length: 128 00:10:53.926 Maximum Copy Length: 128 00:10:53.926 Maximum Source Range Count: 128 00:10:53.926 NGUID/EUI64 Never Reused: No 00:10:53.926 Namespace Write Protected: No 00:10:53.926 Number of LBA Formats: 8 00:10:53.926 Current LBA Format: LBA Format #04 00:10:53.926 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:53.926 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:53.926 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:53.926 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:53.926 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:53.926 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:53.926 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:53.926 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:53.926 00:10:53.926 NVM Specific Namespace Data 00:10:53.926 
=========================== 00:10:53.926 Logical Block Storage Tag Mask: 0 00:10:53.926 Protection Information Capabilities: 00:10:53.926 16b Guard Protection Information Storage Tag Support: No 00:10:53.926 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:53.926 Storage Tag Check Read Support: No 00:10:53.926 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:53.926 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:53.926 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:53.926 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:53.926 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:53.926 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:53.926 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:53.926 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:53.926 Namespace ID:2 00:10:53.926 Error Recovery Timeout: Unlimited 00:10:53.926 Command Set Identifier: NVM (00h) 00:10:53.926 Deallocate: Supported 00:10:53.926 Deallocated/Unwritten Error: Supported 00:10:53.926 Deallocated Read Value: All 0x00 00:10:53.926 Deallocate in Write Zeroes: Not Supported 00:10:53.926 Deallocated Guard Field: 0xFFFF 00:10:53.926 Flush: Supported 00:10:53.926 Reservation: Not Supported 00:10:53.926 Namespace Sharing Capabilities: Private 00:10:53.926 Size (in LBAs): 1048576 (4GiB) 00:10:53.926 Capacity (in LBAs): 1048576 (4GiB) 00:10:53.926 Utilization (in LBAs): 1048576 (4GiB) 00:10:53.926 Thin Provisioning: Not Supported 00:10:53.926 Per-NS Atomic Units: No 00:10:53.926 Maximum Single Source Range Length: 128 00:10:53.926 Maximum Copy Length: 128 00:10:53.926 Maximum Source Range Count: 128 00:10:53.926 NGUID/EUI64 Never Reused: No 00:10:53.926 Namespace Write Protected: No 00:10:53.926 Number of LBA Formats: 8 00:10:53.926 Current LBA Format: LBA Format #04 00:10:53.926 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:53.926 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:53.926 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:53.926 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:53.926 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:53.926 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:53.926 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:53.926 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:53.926 00:10:53.926 NVM Specific Namespace Data 00:10:53.926 =========================== 00:10:53.926 Logical Block Storage Tag Mask: 0 00:10:53.926 Protection Information Capabilities: 00:10:53.926 16b Guard Protection Information Storage Tag Support: No 00:10:53.926 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:53.926 Storage Tag Check Read Support: No 00:10:53.926 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:53.926 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:53.926 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:53.926 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:53.926 Extended LBA Format #04: Storage Tag Size: 0 , 
Protection Information Format: 16b Guard PI 00:10:53.926 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:53.926 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:53.926 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:53.926 Namespace ID:3 00:10:53.926 Error Recovery Timeout: Unlimited 00:10:53.926 Command Set Identifier: NVM (00h) 00:10:53.926 Deallocate: Supported 00:10:53.926 Deallocated/Unwritten Error: Supported 00:10:53.926 Deallocated Read Value: All 0x00 00:10:53.926 Deallocate in Write Zeroes: Not Supported 00:10:53.926 Deallocated Guard Field: 0xFFFF 00:10:53.926 Flush: Supported 00:10:53.926 Reservation: Not Supported 00:10:53.926 Namespace Sharing Capabilities: Private 00:10:53.926 Size (in LBAs): 1048576 (4GiB) 00:10:53.926 Capacity (in LBAs): 1048576 (4GiB) 00:10:53.926 Utilization (in LBAs): 1048576 (4GiB) 00:10:53.926 Thin Provisioning: Not Supported 00:10:53.926 Per-NS Atomic Units: No 00:10:53.926 Maximum Single Source Range Length: 128 00:10:53.926 Maximum Copy Length: 128 00:10:53.926 Maximum Source Range Count: 128 00:10:53.926 NGUID/EUI64 Never Reused: No 00:10:53.926 Namespace Write Protected: No 00:10:53.926 Number of LBA Formats: 8 00:10:53.926 Current LBA Format: LBA Format #04 00:10:53.926 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:53.926 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:53.926 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:53.926 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:53.926 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:53.926 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:53.926 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:53.926 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:53.926 00:10:53.926 NVM Specific Namespace Data 00:10:53.926 =========================== 00:10:53.926 Logical Block Storage Tag Mask: 0 00:10:53.926 Protection Information Capabilities: 00:10:53.926 16b Guard Protection Information Storage Tag Support: No 00:10:53.926 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:53.926 Storage Tag Check Read Support: No 00:10:53.926 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:53.926 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:53.926 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:53.926 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:53.926 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:53.926 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:53.926 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:53.926 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:53.926 11:35:52 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:53.926 11:35:52 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:10:54.185 ===================================================== 00:10:54.185 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:54.185 ===================================================== 00:10:54.185 
Controller Capabilities/Features 00:10:54.185 ================================ 00:10:54.185 Vendor ID: 1b36 00:10:54.185 Subsystem Vendor ID: 1af4 00:10:54.185 Serial Number: 12340 00:10:54.185 Model Number: QEMU NVMe Ctrl 00:10:54.185 Firmware Version: 8.0.0 00:10:54.185 Recommended Arb Burst: 6 00:10:54.185 IEEE OUI Identifier: 00 54 52 00:10:54.185 Multi-path I/O 00:10:54.185 May have multiple subsystem ports: No 00:10:54.185 May have multiple controllers: No 00:10:54.185 Associated with SR-IOV VF: No 00:10:54.185 Max Data Transfer Size: 524288 00:10:54.185 Max Number of Namespaces: 256 00:10:54.185 Max Number of I/O Queues: 64 00:10:54.185 NVMe Specification Version (VS): 1.4 00:10:54.185 NVMe Specification Version (Identify): 1.4 00:10:54.185 Maximum Queue Entries: 2048 00:10:54.185 Contiguous Queues Required: Yes 00:10:54.185 Arbitration Mechanisms Supported 00:10:54.185 Weighted Round Robin: Not Supported 00:10:54.185 Vendor Specific: Not Supported 00:10:54.185 Reset Timeout: 7500 ms 00:10:54.185 Doorbell Stride: 4 bytes 00:10:54.185 NVM Subsystem Reset: Not Supported 00:10:54.185 Command Sets Supported 00:10:54.185 NVM Command Set: Supported 00:10:54.185 Boot Partition: Not Supported 00:10:54.185 Memory Page Size Minimum: 4096 bytes 00:10:54.185 Memory Page Size Maximum: 65536 bytes 00:10:54.185 Persistent Memory Region: Not Supported 00:10:54.185 Optional Asynchronous Events Supported 00:10:54.185 Namespace Attribute Notices: Supported 00:10:54.185 Firmware Activation Notices: Not Supported 00:10:54.185 ANA Change Notices: Not Supported 00:10:54.185 PLE Aggregate Log Change Notices: Not Supported 00:10:54.185 LBA Status Info Alert Notices: Not Supported 00:10:54.185 EGE Aggregate Log Change Notices: Not Supported 00:10:54.185 Normal NVM Subsystem Shutdown event: Not Supported 00:10:54.185 Zone Descriptor Change Notices: Not Supported 00:10:54.185 Discovery Log Change Notices: Not Supported 00:10:54.185 Controller Attributes 00:10:54.185 128-bit Host Identifier: Not Supported 00:10:54.185 Non-Operational Permissive Mode: Not Supported 00:10:54.185 NVM Sets: Not Supported 00:10:54.185 Read Recovery Levels: Not Supported 00:10:54.185 Endurance Groups: Not Supported 00:10:54.185 Predictable Latency Mode: Not Supported 00:10:54.185 Traffic Based Keep ALive: Not Supported 00:10:54.185 Namespace Granularity: Not Supported 00:10:54.185 SQ Associations: Not Supported 00:10:54.185 UUID List: Not Supported 00:10:54.185 Multi-Domain Subsystem: Not Supported 00:10:54.185 Fixed Capacity Management: Not Supported 00:10:54.185 Variable Capacity Management: Not Supported 00:10:54.185 Delete Endurance Group: Not Supported 00:10:54.185 Delete NVM Set: Not Supported 00:10:54.185 Extended LBA Formats Supported: Supported 00:10:54.185 Flexible Data Placement Supported: Not Supported 00:10:54.185 00:10:54.185 Controller Memory Buffer Support 00:10:54.185 ================================ 00:10:54.185 Supported: No 00:10:54.185 00:10:54.185 Persistent Memory Region Support 00:10:54.185 ================================ 00:10:54.185 Supported: No 00:10:54.185 00:10:54.185 Admin Command Set Attributes 00:10:54.185 ============================ 00:10:54.185 Security Send/Receive: Not Supported 00:10:54.185 Format NVM: Supported 00:10:54.185 Firmware Activate/Download: Not Supported 00:10:54.185 Namespace Management: Supported 00:10:54.185 Device Self-Test: Not Supported 00:10:54.185 Directives: Supported 00:10:54.185 NVMe-MI: Not Supported 00:10:54.185 Virtualization Management: Not Supported 00:10:54.185 
Doorbell Buffer Config: Supported 00:10:54.185 Get LBA Status Capability: Not Supported 00:10:54.185 Command & Feature Lockdown Capability: Not Supported 00:10:54.185 Abort Command Limit: 4 00:10:54.185 Async Event Request Limit: 4 00:10:54.185 Number of Firmware Slots: N/A 00:10:54.185 Firmware Slot 1 Read-Only: N/A 00:10:54.185 Firmware Activation Without Reset: N/A 00:10:54.185 Multiple Update Detection Support: N/A 00:10:54.185 Firmware Update Granularity: No Information Provided 00:10:54.185 Per-Namespace SMART Log: Yes 00:10:54.185 Asymmetric Namespace Access Log Page: Not Supported 00:10:54.185 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:10:54.185 Command Effects Log Page: Supported 00:10:54.185 Get Log Page Extended Data: Supported 00:10:54.185 Telemetry Log Pages: Not Supported 00:10:54.185 Persistent Event Log Pages: Not Supported 00:10:54.185 Supported Log Pages Log Page: May Support 00:10:54.185 Commands Supported & Effects Log Page: Not Supported 00:10:54.185 Feature Identifiers & Effects Log Page:May Support 00:10:54.185 NVMe-MI Commands & Effects Log Page: May Support 00:10:54.185 Data Area 4 for Telemetry Log: Not Supported 00:10:54.185 Error Log Page Entries Supported: 1 00:10:54.185 Keep Alive: Not Supported 00:10:54.185 00:10:54.185 NVM Command Set Attributes 00:10:54.185 ========================== 00:10:54.185 Submission Queue Entry Size 00:10:54.185 Max: 64 00:10:54.185 Min: 64 00:10:54.185 Completion Queue Entry Size 00:10:54.185 Max: 16 00:10:54.185 Min: 16 00:10:54.185 Number of Namespaces: 256 00:10:54.185 Compare Command: Supported 00:10:54.185 Write Uncorrectable Command: Not Supported 00:10:54.185 Dataset Management Command: Supported 00:10:54.185 Write Zeroes Command: Supported 00:10:54.185 Set Features Save Field: Supported 00:10:54.185 Reservations: Not Supported 00:10:54.185 Timestamp: Supported 00:10:54.185 Copy: Supported 00:10:54.185 Volatile Write Cache: Present 00:10:54.185 Atomic Write Unit (Normal): 1 00:10:54.185 Atomic Write Unit (PFail): 1 00:10:54.185 Atomic Compare & Write Unit: 1 00:10:54.185 Fused Compare & Write: Not Supported 00:10:54.185 Scatter-Gather List 00:10:54.185 SGL Command Set: Supported 00:10:54.185 SGL Keyed: Not Supported 00:10:54.185 SGL Bit Bucket Descriptor: Not Supported 00:10:54.185 SGL Metadata Pointer: Not Supported 00:10:54.185 Oversized SGL: Not Supported 00:10:54.185 SGL Metadata Address: Not Supported 00:10:54.186 SGL Offset: Not Supported 00:10:54.186 Transport SGL Data Block: Not Supported 00:10:54.186 Replay Protected Memory Block: Not Supported 00:10:54.186 00:10:54.186 Firmware Slot Information 00:10:54.186 ========================= 00:10:54.186 Active slot: 1 00:10:54.186 Slot 1 Firmware Revision: 1.0 00:10:54.186 00:10:54.186 00:10:54.186 Commands Supported and Effects 00:10:54.186 ============================== 00:10:54.186 Admin Commands 00:10:54.186 -------------- 00:10:54.186 Delete I/O Submission Queue (00h): Supported 00:10:54.186 Create I/O Submission Queue (01h): Supported 00:10:54.186 Get Log Page (02h): Supported 00:10:54.186 Delete I/O Completion Queue (04h): Supported 00:10:54.186 Create I/O Completion Queue (05h): Supported 00:10:54.186 Identify (06h): Supported 00:10:54.186 Abort (08h): Supported 00:10:54.186 Set Features (09h): Supported 00:10:54.186 Get Features (0Ah): Supported 00:10:54.186 Asynchronous Event Request (0Ch): Supported 00:10:54.186 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:54.186 Directive Send (19h): Supported 00:10:54.186 Directive Receive (1Ah): Supported 
00:10:54.186 Virtualization Management (1Ch): Supported 00:10:54.186 Doorbell Buffer Config (7Ch): Supported 00:10:54.186 Format NVM (80h): Supported LBA-Change 00:10:54.186 I/O Commands 00:10:54.186 ------------ 00:10:54.186 Flush (00h): Supported LBA-Change 00:10:54.186 Write (01h): Supported LBA-Change 00:10:54.186 Read (02h): Supported 00:10:54.186 Compare (05h): Supported 00:10:54.186 Write Zeroes (08h): Supported LBA-Change 00:10:54.186 Dataset Management (09h): Supported LBA-Change 00:10:54.186 Unknown (0Ch): Supported 00:10:54.186 Unknown (12h): Supported 00:10:54.186 Copy (19h): Supported LBA-Change 00:10:54.186 Unknown (1Dh): Supported LBA-Change 00:10:54.186 00:10:54.186 Error Log 00:10:54.186 ========= 00:10:54.186 00:10:54.186 Arbitration 00:10:54.186 =========== 00:10:54.186 Arbitration Burst: no limit 00:10:54.186 00:10:54.186 Power Management 00:10:54.186 ================ 00:10:54.186 Number of Power States: 1 00:10:54.186 Current Power State: Power State #0 00:10:54.186 Power State #0: 00:10:54.186 Max Power: 25.00 W 00:10:54.186 Non-Operational State: Operational 00:10:54.186 Entry Latency: 16 microseconds 00:10:54.186 Exit Latency: 4 microseconds 00:10:54.186 Relative Read Throughput: 0 00:10:54.186 Relative Read Latency: 0 00:10:54.186 Relative Write Throughput: 0 00:10:54.186 Relative Write Latency: 0 00:10:54.186 Idle Power: Not Reported 00:10:54.186 Active Power: Not Reported 00:10:54.186 Non-Operational Permissive Mode: Not Supported 00:10:54.186 00:10:54.186 Health Information 00:10:54.186 ================== 00:10:54.186 Critical Warnings: 00:10:54.186 Available Spare Space: OK 00:10:54.186 Temperature: OK 00:10:54.186 Device Reliability: OK 00:10:54.186 Read Only: No 00:10:54.186 Volatile Memory Backup: OK 00:10:54.186 Current Temperature: 323 Kelvin (50 Celsius) 00:10:54.186 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:54.186 Available Spare: 0% 00:10:54.186 Available Spare Threshold: 0% 00:10:54.186 Life Percentage Used: 0% 00:10:54.186 Data Units Read: 683 00:10:54.186 Data Units Written: 574 00:10:54.186 Host Read Commands: 32033 00:10:54.186 Host Write Commands: 31071 00:10:54.186 Controller Busy Time: 0 minutes 00:10:54.186 Power Cycles: 0 00:10:54.186 Power On Hours: 0 hours 00:10:54.186 Unsafe Shutdowns: 0 00:10:54.186 Unrecoverable Media Errors: 0 00:10:54.186 Lifetime Error Log Entries: 0 00:10:54.186 Warning Temperature Time: 0 minutes 00:10:54.186 Critical Temperature Time: 0 minutes 00:10:54.186 00:10:54.186 Number of Queues 00:10:54.186 ================ 00:10:54.186 Number of I/O Submission Queues: 64 00:10:54.186 Number of I/O Completion Queues: 64 00:10:54.186 00:10:54.186 ZNS Specific Controller Data 00:10:54.186 ============================ 00:10:54.186 Zone Append Size Limit: 0 00:10:54.186 00:10:54.186 00:10:54.186 Active Namespaces 00:10:54.186 ================= 00:10:54.186 Namespace ID:1 00:10:54.186 Error Recovery Timeout: Unlimited 00:10:54.186 Command Set Identifier: NVM (00h) 00:10:54.186 Deallocate: Supported 00:10:54.186 Deallocated/Unwritten Error: Supported 00:10:54.186 Deallocated Read Value: All 0x00 00:10:54.186 Deallocate in Write Zeroes: Not Supported 00:10:54.186 Deallocated Guard Field: 0xFFFF 00:10:54.186 Flush: Supported 00:10:54.186 Reservation: Not Supported 00:10:54.186 Metadata Transferred as: Separate Metadata Buffer 00:10:54.186 Namespace Sharing Capabilities: Private 00:10:54.186 Size (in LBAs): 1548666 (5GiB) 00:10:54.186 Capacity (in LBAs): 1548666 (5GiB) 00:10:54.186 Utilization (in LBAs): 1548666 (5GiB) 
00:10:54.186 Thin Provisioning: Not Supported 00:10:54.186 Per-NS Atomic Units: No 00:10:54.186 Maximum Single Source Range Length: 128 00:10:54.186 Maximum Copy Length: 128 00:10:54.186 Maximum Source Range Count: 128 00:10:54.186 NGUID/EUI64 Never Reused: No 00:10:54.186 Namespace Write Protected: No 00:10:54.186 Number of LBA Formats: 8 00:10:54.186 Current LBA Format: LBA Format #07 00:10:54.186 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:54.186 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:54.186 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:54.186 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:54.186 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:54.186 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:54.186 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:54.186 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:54.186 00:10:54.186 NVM Specific Namespace Data 00:10:54.186 =========================== 00:10:54.186 Logical Block Storage Tag Mask: 0 00:10:54.186 Protection Information Capabilities: 00:10:54.186 16b Guard Protection Information Storage Tag Support: No 00:10:54.186 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:54.186 Storage Tag Check Read Support: No 00:10:54.186 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.186 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.186 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.186 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.186 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.186 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.186 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.186 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.186 11:35:53 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:54.186 11:35:53 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:10:54.446 ===================================================== 00:10:54.446 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:54.446 ===================================================== 00:10:54.446 Controller Capabilities/Features 00:10:54.446 ================================ 00:10:54.446 Vendor ID: 1b36 00:10:54.446 Subsystem Vendor ID: 1af4 00:10:54.446 Serial Number: 12341 00:10:54.446 Model Number: QEMU NVMe Ctrl 00:10:54.446 Firmware Version: 8.0.0 00:10:54.446 Recommended Arb Burst: 6 00:10:54.446 IEEE OUI Identifier: 00 54 52 00:10:54.446 Multi-path I/O 00:10:54.446 May have multiple subsystem ports: No 00:10:54.446 May have multiple controllers: No 00:10:54.446 Associated with SR-IOV VF: No 00:10:54.446 Max Data Transfer Size: 524288 00:10:54.446 Max Number of Namespaces: 256 00:10:54.446 Max Number of I/O Queues: 64 00:10:54.446 NVMe Specification Version (VS): 1.4 00:10:54.446 NVMe Specification Version (Identify): 1.4 00:10:54.446 Maximum Queue Entries: 2048 00:10:54.446 Contiguous Queues Required: Yes 00:10:54.446 Arbitration Mechanisms Supported 00:10:54.446 Weighted Round Robin: Not Supported 00:10:54.446 Vendor Specific: Not Supported 
00:10:54.446 Reset Timeout: 7500 ms 00:10:54.446 Doorbell Stride: 4 bytes 00:10:54.446 NVM Subsystem Reset: Not Supported 00:10:54.446 Command Sets Supported 00:10:54.446 NVM Command Set: Supported 00:10:54.446 Boot Partition: Not Supported 00:10:54.446 Memory Page Size Minimum: 4096 bytes 00:10:54.446 Memory Page Size Maximum: 65536 bytes 00:10:54.446 Persistent Memory Region: Not Supported 00:10:54.446 Optional Asynchronous Events Supported 00:10:54.446 Namespace Attribute Notices: Supported 00:10:54.446 Firmware Activation Notices: Not Supported 00:10:54.446 ANA Change Notices: Not Supported 00:10:54.446 PLE Aggregate Log Change Notices: Not Supported 00:10:54.446 LBA Status Info Alert Notices: Not Supported 00:10:54.446 EGE Aggregate Log Change Notices: Not Supported 00:10:54.446 Normal NVM Subsystem Shutdown event: Not Supported 00:10:54.446 Zone Descriptor Change Notices: Not Supported 00:10:54.446 Discovery Log Change Notices: Not Supported 00:10:54.446 Controller Attributes 00:10:54.446 128-bit Host Identifier: Not Supported 00:10:54.446 Non-Operational Permissive Mode: Not Supported 00:10:54.446 NVM Sets: Not Supported 00:10:54.446 Read Recovery Levels: Not Supported 00:10:54.446 Endurance Groups: Not Supported 00:10:54.446 Predictable Latency Mode: Not Supported 00:10:54.446 Traffic Based Keep ALive: Not Supported 00:10:54.446 Namespace Granularity: Not Supported 00:10:54.446 SQ Associations: Not Supported 00:10:54.446 UUID List: Not Supported 00:10:54.446 Multi-Domain Subsystem: Not Supported 00:10:54.446 Fixed Capacity Management: Not Supported 00:10:54.446 Variable Capacity Management: Not Supported 00:10:54.446 Delete Endurance Group: Not Supported 00:10:54.446 Delete NVM Set: Not Supported 00:10:54.446 Extended LBA Formats Supported: Supported 00:10:54.446 Flexible Data Placement Supported: Not Supported 00:10:54.446 00:10:54.446 Controller Memory Buffer Support 00:10:54.446 ================================ 00:10:54.446 Supported: No 00:10:54.446 00:10:54.446 Persistent Memory Region Support 00:10:54.446 ================================ 00:10:54.446 Supported: No 00:10:54.446 00:10:54.446 Admin Command Set Attributes 00:10:54.446 ============================ 00:10:54.446 Security Send/Receive: Not Supported 00:10:54.446 Format NVM: Supported 00:10:54.446 Firmware Activate/Download: Not Supported 00:10:54.446 Namespace Management: Supported 00:10:54.446 Device Self-Test: Not Supported 00:10:54.446 Directives: Supported 00:10:54.446 NVMe-MI: Not Supported 00:10:54.446 Virtualization Management: Not Supported 00:10:54.446 Doorbell Buffer Config: Supported 00:10:54.446 Get LBA Status Capability: Not Supported 00:10:54.446 Command & Feature Lockdown Capability: Not Supported 00:10:54.446 Abort Command Limit: 4 00:10:54.446 Async Event Request Limit: 4 00:10:54.446 Number of Firmware Slots: N/A 00:10:54.446 Firmware Slot 1 Read-Only: N/A 00:10:54.446 Firmware Activation Without Reset: N/A 00:10:54.446 Multiple Update Detection Support: N/A 00:10:54.446 Firmware Update Granularity: No Information Provided 00:10:54.446 Per-Namespace SMART Log: Yes 00:10:54.446 Asymmetric Namespace Access Log Page: Not Supported 00:10:54.446 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:10:54.446 Command Effects Log Page: Supported 00:10:54.446 Get Log Page Extended Data: Supported 00:10:54.446 Telemetry Log Pages: Not Supported 00:10:54.446 Persistent Event Log Pages: Not Supported 00:10:54.446 Supported Log Pages Log Page: May Support 00:10:54.446 Commands Supported & Effects Log Page: Not Supported 
00:10:54.446 Feature Identifiers & Effects Log Page:May Support 00:10:54.446 NVMe-MI Commands & Effects Log Page: May Support 00:10:54.446 Data Area 4 for Telemetry Log: Not Supported 00:10:54.446 Error Log Page Entries Supported: 1 00:10:54.446 Keep Alive: Not Supported 00:10:54.446 00:10:54.446 NVM Command Set Attributes 00:10:54.446 ========================== 00:10:54.446 Submission Queue Entry Size 00:10:54.446 Max: 64 00:10:54.446 Min: 64 00:10:54.446 Completion Queue Entry Size 00:10:54.446 Max: 16 00:10:54.446 Min: 16 00:10:54.446 Number of Namespaces: 256 00:10:54.446 Compare Command: Supported 00:10:54.446 Write Uncorrectable Command: Not Supported 00:10:54.446 Dataset Management Command: Supported 00:10:54.446 Write Zeroes Command: Supported 00:10:54.446 Set Features Save Field: Supported 00:10:54.446 Reservations: Not Supported 00:10:54.446 Timestamp: Supported 00:10:54.446 Copy: Supported 00:10:54.446 Volatile Write Cache: Present 00:10:54.446 Atomic Write Unit (Normal): 1 00:10:54.446 Atomic Write Unit (PFail): 1 00:10:54.446 Atomic Compare & Write Unit: 1 00:10:54.446 Fused Compare & Write: Not Supported 00:10:54.446 Scatter-Gather List 00:10:54.446 SGL Command Set: Supported 00:10:54.446 SGL Keyed: Not Supported 00:10:54.446 SGL Bit Bucket Descriptor: Not Supported 00:10:54.446 SGL Metadata Pointer: Not Supported 00:10:54.446 Oversized SGL: Not Supported 00:10:54.446 SGL Metadata Address: Not Supported 00:10:54.446 SGL Offset: Not Supported 00:10:54.446 Transport SGL Data Block: Not Supported 00:10:54.446 Replay Protected Memory Block: Not Supported 00:10:54.446 00:10:54.446 Firmware Slot Information 00:10:54.446 ========================= 00:10:54.446 Active slot: 1 00:10:54.446 Slot 1 Firmware Revision: 1.0 00:10:54.446 00:10:54.446 00:10:54.446 Commands Supported and Effects 00:10:54.446 ============================== 00:10:54.446 Admin Commands 00:10:54.446 -------------- 00:10:54.446 Delete I/O Submission Queue (00h): Supported 00:10:54.446 Create I/O Submission Queue (01h): Supported 00:10:54.446 Get Log Page (02h): Supported 00:10:54.446 Delete I/O Completion Queue (04h): Supported 00:10:54.446 Create I/O Completion Queue (05h): Supported 00:10:54.446 Identify (06h): Supported 00:10:54.446 Abort (08h): Supported 00:10:54.446 Set Features (09h): Supported 00:10:54.446 Get Features (0Ah): Supported 00:10:54.446 Asynchronous Event Request (0Ch): Supported 00:10:54.446 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:54.446 Directive Send (19h): Supported 00:10:54.446 Directive Receive (1Ah): Supported 00:10:54.446 Virtualization Management (1Ch): Supported 00:10:54.446 Doorbell Buffer Config (7Ch): Supported 00:10:54.446 Format NVM (80h): Supported LBA-Change 00:10:54.446 I/O Commands 00:10:54.446 ------------ 00:10:54.446 Flush (00h): Supported LBA-Change 00:10:54.446 Write (01h): Supported LBA-Change 00:10:54.446 Read (02h): Supported 00:10:54.446 Compare (05h): Supported 00:10:54.446 Write Zeroes (08h): Supported LBA-Change 00:10:54.446 Dataset Management (09h): Supported LBA-Change 00:10:54.446 Unknown (0Ch): Supported 00:10:54.446 Unknown (12h): Supported 00:10:54.446 Copy (19h): Supported LBA-Change 00:10:54.446 Unknown (1Dh): Supported LBA-Change 00:10:54.446 00:10:54.446 Error Log 00:10:54.446 ========= 00:10:54.446 00:10:54.446 Arbitration 00:10:54.446 =========== 00:10:54.446 Arbitration Burst: no limit 00:10:54.446 00:10:54.446 Power Management 00:10:54.446 ================ 00:10:54.446 Number of Power States: 1 00:10:54.446 Current Power State: 
Power State #0 00:10:54.447 Power State #0: 00:10:54.447 Max Power: 25.00 W 00:10:54.447 Non-Operational State: Operational 00:10:54.447 Entry Latency: 16 microseconds 00:10:54.447 Exit Latency: 4 microseconds 00:10:54.447 Relative Read Throughput: 0 00:10:54.447 Relative Read Latency: 0 00:10:54.447 Relative Write Throughput: 0 00:10:54.447 Relative Write Latency: 0 00:10:54.447 Idle Power: Not Reported 00:10:54.447 Active Power: Not Reported 00:10:54.447 Non-Operational Permissive Mode: Not Supported 00:10:54.447 00:10:54.447 Health Information 00:10:54.447 ================== 00:10:54.447 Critical Warnings: 00:10:54.447 Available Spare Space: OK 00:10:54.447 Temperature: OK 00:10:54.447 Device Reliability: OK 00:10:54.447 Read Only: No 00:10:54.447 Volatile Memory Backup: OK 00:10:54.447 Current Temperature: 323 Kelvin (50 Celsius) 00:10:54.447 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:54.447 Available Spare: 0% 00:10:54.447 Available Spare Threshold: 0% 00:10:54.447 Life Percentage Used: 0% 00:10:54.447 Data Units Read: 1069 00:10:54.447 Data Units Written: 864 00:10:54.447 Host Read Commands: 47613 00:10:54.447 Host Write Commands: 44857 00:10:54.447 Controller Busy Time: 0 minutes 00:10:54.447 Power Cycles: 0 00:10:54.447 Power On Hours: 0 hours 00:10:54.447 Unsafe Shutdowns: 0 00:10:54.447 Unrecoverable Media Errors: 0 00:10:54.447 Lifetime Error Log Entries: 0 00:10:54.447 Warning Temperature Time: 0 minutes 00:10:54.447 Critical Temperature Time: 0 minutes 00:10:54.447 00:10:54.447 Number of Queues 00:10:54.447 ================ 00:10:54.447 Number of I/O Submission Queues: 64 00:10:54.447 Number of I/O Completion Queues: 64 00:10:54.447 00:10:54.447 ZNS Specific Controller Data 00:10:54.447 ============================ 00:10:54.447 Zone Append Size Limit: 0 00:10:54.447 00:10:54.447 00:10:54.447 Active Namespaces 00:10:54.447 ================= 00:10:54.447 Namespace ID:1 00:10:54.447 Error Recovery Timeout: Unlimited 00:10:54.447 Command Set Identifier: NVM (00h) 00:10:54.447 Deallocate: Supported 00:10:54.447 Deallocated/Unwritten Error: Supported 00:10:54.447 Deallocated Read Value: All 0x00 00:10:54.447 Deallocate in Write Zeroes: Not Supported 00:10:54.447 Deallocated Guard Field: 0xFFFF 00:10:54.447 Flush: Supported 00:10:54.447 Reservation: Not Supported 00:10:54.447 Namespace Sharing Capabilities: Private 00:10:54.447 Size (in LBAs): 1310720 (5GiB) 00:10:54.447 Capacity (in LBAs): 1310720 (5GiB) 00:10:54.447 Utilization (in LBAs): 1310720 (5GiB) 00:10:54.447 Thin Provisioning: Not Supported 00:10:54.447 Per-NS Atomic Units: No 00:10:54.447 Maximum Single Source Range Length: 128 00:10:54.447 Maximum Copy Length: 128 00:10:54.447 Maximum Source Range Count: 128 00:10:54.447 NGUID/EUI64 Never Reused: No 00:10:54.447 Namespace Write Protected: No 00:10:54.447 Number of LBA Formats: 8 00:10:54.447 Current LBA Format: LBA Format #04 00:10:54.447 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:54.447 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:54.447 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:54.447 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:54.447 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:54.447 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:54.447 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:54.447 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:54.447 00:10:54.447 NVM Specific Namespace Data 00:10:54.447 =========================== 00:10:54.447 Logical Block Storage Tag Mask: 0 
00:10:54.447 Protection Information Capabilities: 00:10:54.447 16b Guard Protection Information Storage Tag Support: No 00:10:54.447 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:54.447 Storage Tag Check Read Support: No 00:10:54.447 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.447 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.447 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.447 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.447 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.447 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.447 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.447 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.447 11:35:53 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:54.447 11:35:53 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:10:54.705 ===================================================== 00:10:54.705 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:54.705 ===================================================== 00:10:54.705 Controller Capabilities/Features 00:10:54.705 ================================ 00:10:54.705 Vendor ID: 1b36 00:10:54.705 Subsystem Vendor ID: 1af4 00:10:54.705 Serial Number: 12342 00:10:54.705 Model Number: QEMU NVMe Ctrl 00:10:54.705 Firmware Version: 8.0.0 00:10:54.705 Recommended Arb Burst: 6 00:10:54.705 IEEE OUI Identifier: 00 54 52 00:10:54.705 Multi-path I/O 00:10:54.705 May have multiple subsystem ports: No 00:10:54.705 May have multiple controllers: No 00:10:54.705 Associated with SR-IOV VF: No 00:10:54.705 Max Data Transfer Size: 524288 00:10:54.705 Max Number of Namespaces: 256 00:10:54.705 Max Number of I/O Queues: 64 00:10:54.705 NVMe Specification Version (VS): 1.4 00:10:54.705 NVMe Specification Version (Identify): 1.4 00:10:54.705 Maximum Queue Entries: 2048 00:10:54.705 Contiguous Queues Required: Yes 00:10:54.705 Arbitration Mechanisms Supported 00:10:54.705 Weighted Round Robin: Not Supported 00:10:54.705 Vendor Specific: Not Supported 00:10:54.705 Reset Timeout: 7500 ms 00:10:54.705 Doorbell Stride: 4 bytes 00:10:54.705 NVM Subsystem Reset: Not Supported 00:10:54.705 Command Sets Supported 00:10:54.705 NVM Command Set: Supported 00:10:54.705 Boot Partition: Not Supported 00:10:54.705 Memory Page Size Minimum: 4096 bytes 00:10:54.705 Memory Page Size Maximum: 65536 bytes 00:10:54.705 Persistent Memory Region: Not Supported 00:10:54.705 Optional Asynchronous Events Supported 00:10:54.705 Namespace Attribute Notices: Supported 00:10:54.705 Firmware Activation Notices: Not Supported 00:10:54.705 ANA Change Notices: Not Supported 00:10:54.705 PLE Aggregate Log Change Notices: Not Supported 00:10:54.705 LBA Status Info Alert Notices: Not Supported 00:10:54.705 EGE Aggregate Log Change Notices: Not Supported 00:10:54.705 Normal NVM Subsystem Shutdown event: Not Supported 00:10:54.705 Zone Descriptor Change Notices: Not Supported 00:10:54.705 Discovery Log Change Notices: Not Supported 00:10:54.705 Controller Attributes 00:10:54.705 128-bit Host Identifier: 
Not Supported 00:10:54.705 Non-Operational Permissive Mode: Not Supported 00:10:54.705 NVM Sets: Not Supported 00:10:54.705 Read Recovery Levels: Not Supported 00:10:54.705 Endurance Groups: Not Supported 00:10:54.705 Predictable Latency Mode: Not Supported 00:10:54.705 Traffic Based Keep ALive: Not Supported 00:10:54.705 Namespace Granularity: Not Supported 00:10:54.705 SQ Associations: Not Supported 00:10:54.705 UUID List: Not Supported 00:10:54.705 Multi-Domain Subsystem: Not Supported 00:10:54.705 Fixed Capacity Management: Not Supported 00:10:54.705 Variable Capacity Management: Not Supported 00:10:54.705 Delete Endurance Group: Not Supported 00:10:54.705 Delete NVM Set: Not Supported 00:10:54.705 Extended LBA Formats Supported: Supported 00:10:54.705 Flexible Data Placement Supported: Not Supported 00:10:54.705 00:10:54.705 Controller Memory Buffer Support 00:10:54.705 ================================ 00:10:54.705 Supported: No 00:10:54.705 00:10:54.705 Persistent Memory Region Support 00:10:54.705 ================================ 00:10:54.705 Supported: No 00:10:54.705 00:10:54.705 Admin Command Set Attributes 00:10:54.705 ============================ 00:10:54.705 Security Send/Receive: Not Supported 00:10:54.705 Format NVM: Supported 00:10:54.705 Firmware Activate/Download: Not Supported 00:10:54.705 Namespace Management: Supported 00:10:54.705 Device Self-Test: Not Supported 00:10:54.705 Directives: Supported 00:10:54.705 NVMe-MI: Not Supported 00:10:54.705 Virtualization Management: Not Supported 00:10:54.705 Doorbell Buffer Config: Supported 00:10:54.705 Get LBA Status Capability: Not Supported 00:10:54.705 Command & Feature Lockdown Capability: Not Supported 00:10:54.705 Abort Command Limit: 4 00:10:54.705 Async Event Request Limit: 4 00:10:54.705 Number of Firmware Slots: N/A 00:10:54.705 Firmware Slot 1 Read-Only: N/A 00:10:54.705 Firmware Activation Without Reset: N/A 00:10:54.705 Multiple Update Detection Support: N/A 00:10:54.705 Firmware Update Granularity: No Information Provided 00:10:54.705 Per-Namespace SMART Log: Yes 00:10:54.705 Asymmetric Namespace Access Log Page: Not Supported 00:10:54.705 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:10:54.705 Command Effects Log Page: Supported 00:10:54.705 Get Log Page Extended Data: Supported 00:10:54.705 Telemetry Log Pages: Not Supported 00:10:54.705 Persistent Event Log Pages: Not Supported 00:10:54.705 Supported Log Pages Log Page: May Support 00:10:54.705 Commands Supported & Effects Log Page: Not Supported 00:10:54.705 Feature Identifiers & Effects Log Page:May Support 00:10:54.705 NVMe-MI Commands & Effects Log Page: May Support 00:10:54.705 Data Area 4 for Telemetry Log: Not Supported 00:10:54.705 Error Log Page Entries Supported: 1 00:10:54.705 Keep Alive: Not Supported 00:10:54.705 00:10:54.705 NVM Command Set Attributes 00:10:54.705 ========================== 00:10:54.705 Submission Queue Entry Size 00:10:54.705 Max: 64 00:10:54.705 Min: 64 00:10:54.705 Completion Queue Entry Size 00:10:54.705 Max: 16 00:10:54.705 Min: 16 00:10:54.705 Number of Namespaces: 256 00:10:54.705 Compare Command: Supported 00:10:54.705 Write Uncorrectable Command: Not Supported 00:10:54.705 Dataset Management Command: Supported 00:10:54.705 Write Zeroes Command: Supported 00:10:54.705 Set Features Save Field: Supported 00:10:54.705 Reservations: Not Supported 00:10:54.705 Timestamp: Supported 00:10:54.705 Copy: Supported 00:10:54.705 Volatile Write Cache: Present 00:10:54.705 Atomic Write Unit (Normal): 1 00:10:54.705 Atomic Write Unit 
(PFail): 1 00:10:54.705 Atomic Compare & Write Unit: 1 00:10:54.705 Fused Compare & Write: Not Supported 00:10:54.705 Scatter-Gather List 00:10:54.705 SGL Command Set: Supported 00:10:54.705 SGL Keyed: Not Supported 00:10:54.705 SGL Bit Bucket Descriptor: Not Supported 00:10:54.705 SGL Metadata Pointer: Not Supported 00:10:54.705 Oversized SGL: Not Supported 00:10:54.705 SGL Metadata Address: Not Supported 00:10:54.705 SGL Offset: Not Supported 00:10:54.705 Transport SGL Data Block: Not Supported 00:10:54.705 Replay Protected Memory Block: Not Supported 00:10:54.705 00:10:54.705 Firmware Slot Information 00:10:54.705 ========================= 00:10:54.705 Active slot: 1 00:10:54.705 Slot 1 Firmware Revision: 1.0 00:10:54.705 00:10:54.705 00:10:54.705 Commands Supported and Effects 00:10:54.705 ============================== 00:10:54.705 Admin Commands 00:10:54.705 -------------- 00:10:54.705 Delete I/O Submission Queue (00h): Supported 00:10:54.705 Create I/O Submission Queue (01h): Supported 00:10:54.705 Get Log Page (02h): Supported 00:10:54.705 Delete I/O Completion Queue (04h): Supported 00:10:54.705 Create I/O Completion Queue (05h): Supported 00:10:54.705 Identify (06h): Supported 00:10:54.705 Abort (08h): Supported 00:10:54.705 Set Features (09h): Supported 00:10:54.705 Get Features (0Ah): Supported 00:10:54.705 Asynchronous Event Request (0Ch): Supported 00:10:54.705 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:54.705 Directive Send (19h): Supported 00:10:54.705 Directive Receive (1Ah): Supported 00:10:54.705 Virtualization Management (1Ch): Supported 00:10:54.705 Doorbell Buffer Config (7Ch): Supported 00:10:54.705 Format NVM (80h): Supported LBA-Change 00:10:54.705 I/O Commands 00:10:54.705 ------------ 00:10:54.705 Flush (00h): Supported LBA-Change 00:10:54.705 Write (01h): Supported LBA-Change 00:10:54.705 Read (02h): Supported 00:10:54.705 Compare (05h): Supported 00:10:54.705 Write Zeroes (08h): Supported LBA-Change 00:10:54.705 Dataset Management (09h): Supported LBA-Change 00:10:54.705 Unknown (0Ch): Supported 00:10:54.705 Unknown (12h): Supported 00:10:54.705 Copy (19h): Supported LBA-Change 00:10:54.705 Unknown (1Dh): Supported LBA-Change 00:10:54.705 00:10:54.705 Error Log 00:10:54.705 ========= 00:10:54.705 00:10:54.705 Arbitration 00:10:54.705 =========== 00:10:54.705 Arbitration Burst: no limit 00:10:54.705 00:10:54.705 Power Management 00:10:54.705 ================ 00:10:54.705 Number of Power States: 1 00:10:54.705 Current Power State: Power State #0 00:10:54.705 Power State #0: 00:10:54.705 Max Power: 25.00 W 00:10:54.705 Non-Operational State: Operational 00:10:54.705 Entry Latency: 16 microseconds 00:10:54.705 Exit Latency: 4 microseconds 00:10:54.705 Relative Read Throughput: 0 00:10:54.705 Relative Read Latency: 0 00:10:54.705 Relative Write Throughput: 0 00:10:54.705 Relative Write Latency: 0 00:10:54.705 Idle Power: Not Reported 00:10:54.705 Active Power: Not Reported 00:10:54.705 Non-Operational Permissive Mode: Not Supported 00:10:54.705 00:10:54.705 Health Information 00:10:54.705 ================== 00:10:54.705 Critical Warnings: 00:10:54.705 Available Spare Space: OK 00:10:54.705 Temperature: OK 00:10:54.705 Device Reliability: OK 00:10:54.705 Read Only: No 00:10:54.705 Volatile Memory Backup: OK 00:10:54.705 Current Temperature: 323 Kelvin (50 Celsius) 00:10:54.705 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:54.705 Available Spare: 0% 00:10:54.705 Available Spare Threshold: 0% 00:10:54.705 Life Percentage Used: 0% 
00:10:54.705 Data Units Read: 2174 00:10:54.705 Data Units Written: 1854 00:10:54.705 Host Read Commands: 97848 00:10:54.705 Host Write Commands: 93618 00:10:54.705 Controller Busy Time: 0 minutes 00:10:54.705 Power Cycles: 0 00:10:54.705 Power On Hours: 0 hours 00:10:54.705 Unsafe Shutdowns: 0 00:10:54.705 Unrecoverable Media Errors: 0 00:10:54.705 Lifetime Error Log Entries: 0 00:10:54.705 Warning Temperature Time: 0 minutes 00:10:54.705 Critical Temperature Time: 0 minutes 00:10:54.705 00:10:54.705 Number of Queues 00:10:54.705 ================ 00:10:54.705 Number of I/O Submission Queues: 64 00:10:54.705 Number of I/O Completion Queues: 64 00:10:54.705 00:10:54.705 ZNS Specific Controller Data 00:10:54.705 ============================ 00:10:54.705 Zone Append Size Limit: 0 00:10:54.705 00:10:54.705 00:10:54.705 Active Namespaces 00:10:54.705 ================= 00:10:54.705 Namespace ID:1 00:10:54.705 Error Recovery Timeout: Unlimited 00:10:54.705 Command Set Identifier: NVM (00h) 00:10:54.705 Deallocate: Supported 00:10:54.705 Deallocated/Unwritten Error: Supported 00:10:54.705 Deallocated Read Value: All 0x00 00:10:54.705 Deallocate in Write Zeroes: Not Supported 00:10:54.705 Deallocated Guard Field: 0xFFFF 00:10:54.705 Flush: Supported 00:10:54.705 Reservation: Not Supported 00:10:54.705 Namespace Sharing Capabilities: Private 00:10:54.705 Size (in LBAs): 1048576 (4GiB) 00:10:54.705 Capacity (in LBAs): 1048576 (4GiB) 00:10:54.705 Utilization (in LBAs): 1048576 (4GiB) 00:10:54.705 Thin Provisioning: Not Supported 00:10:54.705 Per-NS Atomic Units: No 00:10:54.705 Maximum Single Source Range Length: 128 00:10:54.705 Maximum Copy Length: 128 00:10:54.705 Maximum Source Range Count: 128 00:10:54.705 NGUID/EUI64 Never Reused: No 00:10:54.705 Namespace Write Protected: No 00:10:54.705 Number of LBA Formats: 8 00:10:54.705 Current LBA Format: LBA Format #04 00:10:54.705 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:54.705 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:54.705 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:54.705 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:54.705 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:54.705 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:54.705 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:54.705 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:54.705 00:10:54.705 NVM Specific Namespace Data 00:10:54.705 =========================== 00:10:54.705 Logical Block Storage Tag Mask: 0 00:10:54.705 Protection Information Capabilities: 00:10:54.705 16b Guard Protection Information Storage Tag Support: No 00:10:54.705 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:54.705 Storage Tag Check Read Support: No 00:10:54.705 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.705 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.705 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.705 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.705 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.705 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.705 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.705 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.705 Namespace ID:2 00:10:54.705 Error Recovery Timeout: Unlimited 00:10:54.705 Command Set Identifier: NVM (00h) 00:10:54.705 Deallocate: Supported 00:10:54.705 Deallocated/Unwritten Error: Supported 00:10:54.705 Deallocated Read Value: All 0x00 00:10:54.705 Deallocate in Write Zeroes: Not Supported 00:10:54.705 Deallocated Guard Field: 0xFFFF 00:10:54.705 Flush: Supported 00:10:54.705 Reservation: Not Supported 00:10:54.705 Namespace Sharing Capabilities: Private 00:10:54.705 Size (in LBAs): 1048576 (4GiB) 00:10:54.705 Capacity (in LBAs): 1048576 (4GiB) 00:10:54.705 Utilization (in LBAs): 1048576 (4GiB) 00:10:54.705 Thin Provisioning: Not Supported 00:10:54.705 Per-NS Atomic Units: No 00:10:54.705 Maximum Single Source Range Length: 128 00:10:54.705 Maximum Copy Length: 128 00:10:54.705 Maximum Source Range Count: 128 00:10:54.705 NGUID/EUI64 Never Reused: No 00:10:54.705 Namespace Write Protected: No 00:10:54.705 Number of LBA Formats: 8 00:10:54.705 Current LBA Format: LBA Format #04 00:10:54.705 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:54.705 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:54.705 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:54.705 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:54.705 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:54.705 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:54.705 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:54.706 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:54.706 00:10:54.706 NVM Specific Namespace Data 00:10:54.706 =========================== 00:10:54.706 Logical Block Storage Tag Mask: 0 00:10:54.706 Protection Information Capabilities: 00:10:54.706 16b Guard Protection Information Storage Tag Support: No 00:10:54.706 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:54.706 Storage Tag Check Read Support: No 00:10:54.706 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.706 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.706 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.706 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.706 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.706 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.706 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.706 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.706 Namespace ID:3 00:10:54.706 Error Recovery Timeout: Unlimited 00:10:54.706 Command Set Identifier: NVM (00h) 00:10:54.706 Deallocate: Supported 00:10:54.706 Deallocated/Unwritten Error: Supported 00:10:54.706 Deallocated Read Value: All 0x00 00:10:54.706 Deallocate in Write Zeroes: Not Supported 00:10:54.706 Deallocated Guard Field: 0xFFFF 00:10:54.706 Flush: Supported 00:10:54.706 Reservation: Not Supported 00:10:54.706 Namespace Sharing Capabilities: Private 00:10:54.706 Size (in LBAs): 1048576 (4GiB) 00:10:54.706 Capacity (in LBAs): 1048576 (4GiB) 00:10:54.706 Utilization (in LBAs): 1048576 (4GiB) 00:10:54.706 Thin Provisioning: Not Supported 00:10:54.706 Per-NS Atomic Units: No 00:10:54.706 Maximum Single Source Range 
Length: 128 00:10:54.706 Maximum Copy Length: 128 00:10:54.706 Maximum Source Range Count: 128 00:10:54.706 NGUID/EUI64 Never Reused: No 00:10:54.706 Namespace Write Protected: No 00:10:54.706 Number of LBA Formats: 8 00:10:54.706 Current LBA Format: LBA Format #04 00:10:54.706 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:54.706 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:54.706 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:54.706 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:54.706 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:54.706 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:54.706 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:54.706 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:54.706 00:10:54.706 NVM Specific Namespace Data 00:10:54.706 =========================== 00:10:54.706 Logical Block Storage Tag Mask: 0 00:10:54.706 Protection Information Capabilities: 00:10:54.706 16b Guard Protection Information Storage Tag Support: No 00:10:54.706 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:54.706 Storage Tag Check Read Support: No 00:10:54.706 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.706 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.706 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.706 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.706 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.706 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.706 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.706 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.706 11:35:53 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:54.706 11:35:53 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:10:54.964 ===================================================== 00:10:54.964 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:54.964 ===================================================== 00:10:54.964 Controller Capabilities/Features 00:10:54.964 ================================ 00:10:54.964 Vendor ID: 1b36 00:10:54.964 Subsystem Vendor ID: 1af4 00:10:54.964 Serial Number: 12343 00:10:54.964 Model Number: QEMU NVMe Ctrl 00:10:54.964 Firmware Version: 8.0.0 00:10:54.964 Recommended Arb Burst: 6 00:10:54.964 IEEE OUI Identifier: 00 54 52 00:10:54.964 Multi-path I/O 00:10:54.964 May have multiple subsystem ports: No 00:10:54.964 May have multiple controllers: Yes 00:10:54.964 Associated with SR-IOV VF: No 00:10:54.964 Max Data Transfer Size: 524288 00:10:54.964 Max Number of Namespaces: 256 00:10:54.964 Max Number of I/O Queues: 64 00:10:54.964 NVMe Specification Version (VS): 1.4 00:10:54.964 NVMe Specification Version (Identify): 1.4 00:10:54.964 Maximum Queue Entries: 2048 00:10:54.964 Contiguous Queues Required: Yes 00:10:54.964 Arbitration Mechanisms Supported 00:10:54.964 Weighted Round Robin: Not Supported 00:10:54.964 Vendor Specific: Not Supported 00:10:54.964 Reset Timeout: 7500 ms 00:10:54.964 Doorbell Stride: 4 bytes 00:10:54.964 NVM Subsystem Reset: Not Supported 
00:10:54.964 Command Sets Supported 00:10:54.964 NVM Command Set: Supported 00:10:54.964 Boot Partition: Not Supported 00:10:54.964 Memory Page Size Minimum: 4096 bytes 00:10:54.964 Memory Page Size Maximum: 65536 bytes 00:10:54.964 Persistent Memory Region: Not Supported 00:10:54.964 Optional Asynchronous Events Supported 00:10:54.964 Namespace Attribute Notices: Supported 00:10:54.964 Firmware Activation Notices: Not Supported 00:10:54.964 ANA Change Notices: Not Supported 00:10:54.964 PLE Aggregate Log Change Notices: Not Supported 00:10:54.964 LBA Status Info Alert Notices: Not Supported 00:10:54.964 EGE Aggregate Log Change Notices: Not Supported 00:10:54.964 Normal NVM Subsystem Shutdown event: Not Supported 00:10:54.964 Zone Descriptor Change Notices: Not Supported 00:10:54.964 Discovery Log Change Notices: Not Supported 00:10:54.964 Controller Attributes 00:10:54.964 128-bit Host Identifier: Not Supported 00:10:54.964 Non-Operational Permissive Mode: Not Supported 00:10:54.964 NVM Sets: Not Supported 00:10:54.964 Read Recovery Levels: Not Supported 00:10:54.964 Endurance Groups: Supported 00:10:54.964 Predictable Latency Mode: Not Supported 00:10:54.964 Traffic Based Keep Alive: Not Supported 00:10:54.964 Namespace Granularity: Not Supported 00:10:54.964 SQ Associations: Not Supported 00:10:54.964 UUID List: Not Supported 00:10:54.964 Multi-Domain Subsystem: Not Supported 00:10:54.964 Fixed Capacity Management: Not Supported 00:10:54.964 Variable Capacity Management: Not Supported 00:10:54.964 Delete Endurance Group: Not Supported 00:10:54.964 Delete NVM Set: Not Supported 00:10:54.964 Extended LBA Formats Supported: Supported 00:10:54.964 Flexible Data Placement Supported: Supported 00:10:54.964 00:10:54.964 Controller Memory Buffer Support 00:10:54.964 ================================ 00:10:54.964 Supported: No 00:10:54.964 00:10:54.964 Persistent Memory Region Support 00:10:54.964 ================================ 00:10:54.964 Supported: No 00:10:54.964 00:10:54.964 Admin Command Set Attributes 00:10:54.964 ============================ 00:10:54.964 Security Send/Receive: Not Supported 00:10:54.964 Format NVM: Supported 00:10:54.964 Firmware Activate/Download: Not Supported 00:10:54.964 Namespace Management: Supported 00:10:54.964 Device Self-Test: Not Supported 00:10:54.964 Directives: Supported 00:10:54.964 NVMe-MI: Not Supported 00:10:54.964 Virtualization Management: Not Supported 00:10:54.964 Doorbell Buffer Config: Supported 00:10:54.964 Get LBA Status Capability: Not Supported 00:10:54.964 Command & Feature Lockdown Capability: Not Supported 00:10:54.964 Abort Command Limit: 4 00:10:54.964 Async Event Request Limit: 4 00:10:54.964 Number of Firmware Slots: N/A 00:10:54.964 Firmware Slot 1 Read-Only: N/A 00:10:54.964 Firmware Activation Without Reset: N/A 00:10:54.964 Multiple Update Detection Support: N/A 00:10:54.964 Firmware Update Granularity: No Information Provided 00:10:54.964 Per-Namespace SMART Log: Yes 00:10:54.964 Asymmetric Namespace Access Log Page: Not Supported 00:10:54.964 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:10:54.964 Command Effects Log Page: Supported 00:10:54.964 Get Log Page Extended Data: Supported 00:10:54.964 Telemetry Log Pages: Not Supported 00:10:54.964 Persistent Event Log Pages: Not Supported 00:10:54.964 Supported Log Pages Log Page: May Support 00:10:54.964 Commands Supported & Effects Log Page: Not Supported 00:10:54.964 Feature Identifiers & Effects Log Page: May Support 00:10:54.964 NVMe-MI Commands & Effects Log Page: May 
Support 00:10:54.964 Data Area 4 for Telemetry Log: Not Supported 00:10:54.964 Error Log Page Entries Supported: 1 00:10:54.964 Keep Alive: Not Supported 00:10:54.964 00:10:54.964 NVM Command Set Attributes 00:10:54.964 ========================== 00:10:54.964 Submission Queue Entry Size 00:10:54.964 Max: 64 00:10:54.964 Min: 64 00:10:54.964 Completion Queue Entry Size 00:10:54.964 Max: 16 00:10:54.964 Min: 16 00:10:54.964 Number of Namespaces: 256 00:10:54.964 Compare Command: Supported 00:10:54.964 Write Uncorrectable Command: Not Supported 00:10:54.964 Dataset Management Command: Supported 00:10:54.964 Write Zeroes Command: Supported 00:10:54.964 Set Features Save Field: Supported 00:10:54.964 Reservations: Not Supported 00:10:54.964 Timestamp: Supported 00:10:54.964 Copy: Supported 00:10:54.964 Volatile Write Cache: Present 00:10:54.964 Atomic Write Unit (Normal): 1 00:10:54.964 Atomic Write Unit (PFail): 1 00:10:54.964 Atomic Compare & Write Unit: 1 00:10:54.964 Fused Compare & Write: Not Supported 00:10:54.964 Scatter-Gather List 00:10:54.964 SGL Command Set: Supported 00:10:54.964 SGL Keyed: Not Supported 00:10:54.964 SGL Bit Bucket Descriptor: Not Supported 00:10:54.964 SGL Metadata Pointer: Not Supported 00:10:54.964 Oversized SGL: Not Supported 00:10:54.964 SGL Metadata Address: Not Supported 00:10:54.964 SGL Offset: Not Supported 00:10:54.964 Transport SGL Data Block: Not Supported 00:10:54.964 Replay Protected Memory Block: Not Supported 00:10:54.964 00:10:54.964 Firmware Slot Information 00:10:54.964 ========================= 00:10:54.964 Active slot: 1 00:10:54.964 Slot 1 Firmware Revision: 1.0 00:10:54.964 00:10:54.964 00:10:54.964 Commands Supported and Effects 00:10:54.964 ============================== 00:10:54.964 Admin Commands 00:10:54.964 -------------- 00:10:54.964 Delete I/O Submission Queue (00h): Supported 00:10:54.964 Create I/O Submission Queue (01h): Supported 00:10:54.964 Get Log Page (02h): Supported 00:10:54.964 Delete I/O Completion Queue (04h): Supported 00:10:54.964 Create I/O Completion Queue (05h): Supported 00:10:54.964 Identify (06h): Supported 00:10:54.964 Abort (08h): Supported 00:10:54.964 Set Features (09h): Supported 00:10:54.964 Get Features (0Ah): Supported 00:10:54.964 Asynchronous Event Request (0Ch): Supported 00:10:54.964 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:54.964 Directive Send (19h): Supported 00:10:54.964 Directive Receive (1Ah): Supported 00:10:54.964 Virtualization Management (1Ch): Supported 00:10:54.964 Doorbell Buffer Config (7Ch): Supported 00:10:54.964 Format NVM (80h): Supported LBA-Change 00:10:54.964 I/O Commands 00:10:54.964 ------------ 00:10:54.964 Flush (00h): Supported LBA-Change 00:10:54.964 Write (01h): Supported LBA-Change 00:10:54.964 Read (02h): Supported 00:10:54.964 Compare (05h): Supported 00:10:54.964 Write Zeroes (08h): Supported LBA-Change 00:10:54.964 Dataset Management (09h): Supported LBA-Change 00:10:54.964 Unknown (0Ch): Supported 00:10:54.964 Unknown (12h): Supported 00:10:54.964 Copy (19h): Supported LBA-Change 00:10:54.964 Unknown (1Dh): Supported LBA-Change 00:10:54.964 00:10:54.965 Error Log 00:10:54.965 ========= 00:10:54.965 00:10:54.965 Arbitration 00:10:54.965 =========== 00:10:54.965 Arbitration Burst: no limit 00:10:54.965 00:10:54.965 Power Management 00:10:54.965 ================ 00:10:54.965 Number of Power States: 1 00:10:54.965 Current Power State: Power State #0 00:10:54.965 Power State #0: 00:10:54.965 Max Power: 25.00 W 00:10:54.965 Non-Operational State: 
Operational 00:10:54.965 Entry Latency: 16 microseconds 00:10:54.965 Exit Latency: 4 microseconds 00:10:54.965 Relative Read Throughput: 0 00:10:54.965 Relative Read Latency: 0 00:10:54.965 Relative Write Throughput: 0 00:10:54.965 Relative Write Latency: 0 00:10:54.965 Idle Power: Not Reported 00:10:54.965 Active Power: Not Reported 00:10:54.965 Non-Operational Permissive Mode: Not Supported 00:10:54.965 00:10:54.965 Health Information 00:10:54.965 ================== 00:10:54.965 Critical Warnings: 00:10:54.965 Available Spare Space: OK 00:10:54.965 Temperature: OK 00:10:54.965 Device Reliability: OK 00:10:54.965 Read Only: No 00:10:54.965 Volatile Memory Backup: OK 00:10:54.965 Current Temperature: 323 Kelvin (50 Celsius) 00:10:54.965 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:54.965 Available Spare: 0% 00:10:54.965 Available Spare Threshold: 0% 00:10:54.965 Life Percentage Used: 0% 00:10:54.965 Data Units Read: 777 00:10:54.965 Data Units Written: 670 00:10:54.965 Host Read Commands: 33040 00:10:54.965 Host Write Commands: 31630 00:10:54.965 Controller Busy Time: 0 minutes 00:10:54.965 Power Cycles: 0 00:10:54.965 Power On Hours: 0 hours 00:10:54.965 Unsafe Shutdowns: 0 00:10:54.965 Unrecoverable Media Errors: 0 00:10:54.965 Lifetime Error Log Entries: 0 00:10:54.965 Warning Temperature Time: 0 minutes 00:10:54.965 Critical Temperature Time: 0 minutes 00:10:54.965 00:10:54.965 Number of Queues 00:10:54.965 ================ 00:10:54.965 Number of I/O Submission Queues: 64 00:10:54.965 Number of I/O Completion Queues: 64 00:10:54.965 00:10:54.965 ZNS Specific Controller Data 00:10:54.965 ============================ 00:10:54.965 Zone Append Size Limit: 0 00:10:54.965 00:10:54.965 00:10:54.965 Active Namespaces 00:10:54.965 ================= 00:10:54.965 Namespace ID:1 00:10:54.965 Error Recovery Timeout: Unlimited 00:10:54.965 Command Set Identifier: NVM (00h) 00:10:54.965 Deallocate: Supported 00:10:54.965 Deallocated/Unwritten Error: Supported 00:10:54.965 Deallocated Read Value: All 0x00 00:10:54.965 Deallocate in Write Zeroes: Not Supported 00:10:54.965 Deallocated Guard Field: 0xFFFF 00:10:54.965 Flush: Supported 00:10:54.965 Reservation: Not Supported 00:10:54.965 Namespace Sharing Capabilities: Multiple Controllers 00:10:54.965 Size (in LBAs): 262144 (1GiB) 00:10:54.965 Capacity (in LBAs): 262144 (1GiB) 00:10:54.965 Utilization (in LBAs): 262144 (1GiB) 00:10:54.965 Thin Provisioning: Not Supported 00:10:54.965 Per-NS Atomic Units: No 00:10:54.965 Maximum Single Source Range Length: 128 00:10:54.965 Maximum Copy Length: 128 00:10:54.965 Maximum Source Range Count: 128 00:10:54.965 NGUID/EUI64 Never Reused: No 00:10:54.965 Namespace Write Protected: No 00:10:54.965 Endurance group ID: 1 00:10:54.965 Number of LBA Formats: 8 00:10:54.965 Current LBA Format: LBA Format #04 00:10:54.965 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:54.965 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:54.965 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:54.965 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:54.965 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:54.965 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:54.965 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:54.965 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:54.965 00:10:54.965 Get Feature FDP: 00:10:54.965 ================ 00:10:54.965 Enabled: Yes 00:10:54.965 FDP configuration index: 0 00:10:54.965 00:10:54.965 FDP configurations log page 00:10:54.965 
=========================== 00:10:54.965 Number of FDP configurations: 1 00:10:54.965 Version: 0 00:10:54.965 Size: 112 00:10:54.965 FDP Configuration Descriptor: 0 00:10:54.965 Descriptor Size: 96 00:10:54.965 Reclaim Group Identifier format: 2 00:10:54.965 FDP Volatile Write Cache: Not Present 00:10:54.965 FDP Configuration: Valid 00:10:54.965 Vendor Specific Size: 0 00:10:54.965 Number of Reclaim Groups: 2 00:10:54.965 Number of Reclaim Unit Handles: 8 00:10:54.965 Max Placement Identifiers: 128 00:10:54.965 Number of Namespaces Supported: 256 00:10:54.965 Reclaim Unit Nominal Size: 6000000 bytes 00:10:54.965 Estimated Reclaim Unit Time Limit: Not Reported 00:10:54.965 RUH Desc #000: RUH Type: Initially Isolated 00:10:54.965 RUH Desc #001: RUH Type: Initially Isolated 00:10:54.965 RUH Desc #002: RUH Type: Initially Isolated 00:10:54.965 RUH Desc #003: RUH Type: Initially Isolated 00:10:54.965 RUH Desc #004: RUH Type: Initially Isolated 00:10:54.965 RUH Desc #005: RUH Type: Initially Isolated 00:10:54.965 RUH Desc #006: RUH Type: Initially Isolated 00:10:54.965 RUH Desc #007: RUH Type: Initially Isolated 00:10:54.965 00:10:54.965 FDP reclaim unit handle usage log page 00:10:55.223 ====================================== 00:10:55.223 Number of Reclaim Unit Handles: 8 00:10:55.223 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:55.223 RUH Usage Desc #001: RUH Attributes: Unused 00:10:55.223 RUH Usage Desc #002: RUH Attributes: Unused 00:10:55.223 RUH Usage Desc #003: RUH Attributes: Unused 00:10:55.223 RUH Usage Desc #004: RUH Attributes: Unused 00:10:55.223 RUH Usage Desc #005: RUH Attributes: Unused 00:10:55.223 RUH Usage Desc #006: RUH Attributes: Unused 00:10:55.224 RUH Usage Desc #007: RUH Attributes: Unused 00:10:55.224 00:10:55.224 FDP statistics log page 00:10:55.224 ======================= 00:10:55.224 Host bytes with metadata written: 421502976 00:10:55.224 Media bytes with metadata written: 421548032 00:10:55.224 Media bytes erased: 0 00:10:55.224 00:10:55.224 FDP events log page 00:10:55.224 =================== 00:10:55.224 Number of FDP events: 0 00:10:55.224 00:10:55.224 NVM Specific Namespace Data 00:10:55.224 =========================== 00:10:55.224 Logical Block Storage Tag Mask: 0 00:10:55.224 Protection Information Capabilities: 00:10:55.224 16b Guard Protection Information Storage Tag Support: No 00:10:55.224 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:55.224 Storage Tag Check Read Support: No 00:10:55.224 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:55.224 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:55.224 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:55.224 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:55.224 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:55.224 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:55.224 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:55.224 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:55.224 00:10:55.224 real 0m1.717s 00:10:55.224 user 0m0.671s 00:10:55.224 sys 0m0.831s 00:10:55.224 11:35:54 nvme.nvme_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:55.224 11:35:54 
nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:10:55.224 ************************************ 00:10:55.224 END TEST nvme_identify 00:10:55.224 ************************************ 00:10:55.224 11:35:54 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:10:55.224 11:35:54 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:55.224 11:35:54 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:55.224 11:35:54 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:55.224 ************************************ 00:10:55.224 START TEST nvme_perf 00:10:55.224 ************************************ 00:10:55.224 11:35:54 nvme.nvme_perf -- common/autotest_common.sh@1125 -- # nvme_perf 00:10:55.224 11:35:54 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:10:56.629 Initializing NVMe Controllers 00:10:56.629 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:56.629 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:56.629 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:56.629 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:56.629 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:56.629 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:56.629 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:56.629 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:56.629 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:56.629 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:56.629 Initialization complete. Launching workers. 00:10:56.629 ======================================================== 00:10:56.629 Latency(us) 00:10:56.629 Device Information : IOPS MiB/s Average min max 00:10:56.629 PCIE (0000:00:10.0) NSID 1 from core 0: 12144.87 142.32 10560.02 7941.14 50512.65 00:10:56.629 PCIE (0000:00:11.0) NSID 1 from core 0: 12144.87 142.32 10534.05 7818.20 47561.33 00:10:56.629 PCIE (0000:00:13.0) NSID 1 from core 0: 12144.87 142.32 10505.73 7948.07 44973.93 00:10:56.629 PCIE (0000:00:12.0) NSID 1 from core 0: 12144.87 142.32 10477.49 8007.08 41870.51 00:10:56.629 PCIE (0000:00:12.0) NSID 2 from core 0: 12208.79 143.07 10394.45 8054.15 33576.83 00:10:56.629 PCIE (0000:00:12.0) NSID 3 from core 0: 12208.79 143.07 10366.12 8019.26 30596.80 00:10:56.629 ======================================================== 00:10:56.629 Total : 72997.05 855.43 10472.82 7818.20 50512.65 00:10:56.629 00:10:56.629 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:10:56.629 ================================================================================= 00:10:56.629 1.00000% : 8221.789us 00:10:56.629 10.00000% : 8757.993us 00:10:56.629 25.00000% : 9413.353us 00:10:56.629 50.00000% : 10128.291us 00:10:56.629 75.00000% : 10843.229us 00:10:56.629 90.00000% : 12034.793us 00:10:56.629 95.00000% : 12570.996us 00:10:56.629 98.00000% : 13226.356us 00:10:56.629 99.00000% : 40513.164us 00:10:56.629 99.50000% : 47900.858us 00:10:56.629 99.90000% : 50045.673us 00:10:56.629 99.99000% : 50522.298us 00:10:56.629 99.99900% : 50522.298us 00:10:56.629 99.99990% : 50522.298us 00:10:56.629 99.99999% : 50522.298us 00:10:56.629 00:10:56.629 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:10:56.629 ================================================================================= 00:10:56.629 1.00000% : 8281.367us 00:10:56.629 10.00000% : 8757.993us 00:10:56.629 25.00000% : 9472.931us 
00:10:56.629 50.00000% : 10128.291us 00:10:56.629 75.00000% : 10724.073us 00:10:56.629 90.00000% : 12094.371us 00:10:56.629 95.00000% : 12511.418us 00:10:56.629 98.00000% : 13345.513us 00:10:56.629 99.00000% : 37653.411us 00:10:56.629 99.50000% : 45279.418us 00:10:56.629 99.90000% : 47185.920us 00:10:56.629 99.99000% : 47662.545us 00:10:56.629 99.99900% : 47662.545us 00:10:56.629 99.99990% : 47662.545us 00:10:56.629 99.99999% : 47662.545us 00:10:56.629 00:10:56.629 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:10:56.629 ================================================================================= 00:10:56.629 1.00000% : 8281.367us 00:10:56.629 10.00000% : 8757.993us 00:10:56.629 25.00000% : 9472.931us 00:10:56.629 50.00000% : 10128.291us 00:10:56.629 75.00000% : 10724.073us 00:10:56.629 90.00000% : 12034.793us 00:10:56.629 95.00000% : 12511.418us 00:10:56.629 98.00000% : 13166.778us 00:10:56.629 99.00000% : 35270.284us 00:10:56.629 99.50000% : 42657.978us 00:10:56.629 99.90000% : 44564.480us 00:10:56.629 99.99000% : 45041.105us 00:10:56.629 99.99900% : 45041.105us 00:10:56.629 99.99990% : 45041.105us 00:10:56.629 99.99999% : 45041.105us 00:10:56.629 00:10:56.629 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:10:56.629 ================================================================================= 00:10:56.629 1.00000% : 8281.367us 00:10:56.629 10.00000% : 8757.993us 00:10:56.629 25.00000% : 9472.931us 00:10:56.629 50.00000% : 10128.291us 00:10:56.629 75.00000% : 10724.073us 00:10:56.629 90.00000% : 12034.793us 00:10:56.629 95.00000% : 12511.418us 00:10:56.629 98.00000% : 13107.200us 00:10:56.629 99.00000% : 32410.531us 00:10:56.629 99.50000% : 39559.913us 00:10:56.629 99.90000% : 41466.415us 00:10:56.629 99.99000% : 41943.040us 00:10:56.629 99.99900% : 41943.040us 00:10:56.629 99.99990% : 41943.040us 00:10:56.629 99.99999% : 41943.040us 00:10:56.629 00:10:56.629 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:10:56.629 ================================================================================= 00:10:56.629 1.00000% : 8281.367us 00:10:56.629 10.00000% : 8757.993us 00:10:56.629 25.00000% : 9472.931us 00:10:56.629 50.00000% : 10128.291us 00:10:56.629 75.00000% : 10783.651us 00:10:56.629 90.00000% : 12034.793us 00:10:56.629 95.00000% : 12451.840us 00:10:56.629 98.00000% : 13226.356us 00:10:56.629 99.00000% : 23592.960us 00:10:56.629 99.50000% : 31218.967us 00:10:56.629 99.90000% : 33125.469us 00:10:56.629 99.99000% : 33602.095us 00:10:56.630 99.99900% : 33602.095us 00:10:56.630 99.99990% : 33602.095us 00:10:56.630 99.99999% : 33602.095us 00:10:56.630 00:10:56.630 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:10:56.630 ================================================================================= 00:10:56.630 1.00000% : 8281.367us 00:10:56.630 10.00000% : 8757.993us 00:10:56.630 25.00000% : 9472.931us 00:10:56.630 50.00000% : 10187.869us 00:10:56.630 75.00000% : 10783.651us 00:10:56.630 90.00000% : 12034.793us 00:10:56.630 95.00000% : 12511.418us 00:10:56.630 98.00000% : 13345.513us 00:10:56.630 99.00000% : 20614.051us 00:10:56.630 99.50000% : 28120.902us 00:10:56.630 99.90000% : 30146.560us 00:10:56.630 99.99000% : 30742.342us 00:10:56.630 99.99900% : 30742.342us 00:10:56.630 99.99990% : 30742.342us 00:10:56.630 99.99999% : 30742.342us 00:10:56.630 00:10:56.630 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:10:56.630 
============================================================================== 00:10:56.630 Range in us Cumulative IO count 00:10:56.630 7923.898 - 7983.476: 0.0164% ( 2) 00:10:56.630 7983.476 - 8043.055: 0.1151% ( 12) 00:10:56.630 8043.055 - 8102.633: 0.3618% ( 30) 00:10:56.630 8102.633 - 8162.211: 0.8306% ( 57) 00:10:56.630 8162.211 - 8221.789: 1.3980% ( 69) 00:10:56.630 8221.789 - 8281.367: 2.2451% ( 103) 00:10:56.630 8281.367 - 8340.945: 3.0757% ( 101) 00:10:56.630 8340.945 - 8400.524: 4.0214% ( 115) 00:10:56.630 8400.524 - 8460.102: 5.0082% ( 120) 00:10:56.630 8460.102 - 8519.680: 6.0609% ( 128) 00:10:56.630 8519.680 - 8579.258: 7.2039% ( 139) 00:10:56.630 8579.258 - 8638.836: 8.3059% ( 134) 00:10:56.630 8638.836 - 8698.415: 9.4737% ( 142) 00:10:56.630 8698.415 - 8757.993: 10.5263% ( 128) 00:10:56.630 8757.993 - 8817.571: 11.6530% ( 137) 00:10:56.630 8817.571 - 8877.149: 12.8701% ( 148) 00:10:56.630 8877.149 - 8936.727: 14.0296% ( 141) 00:10:56.630 8936.727 - 8996.305: 15.3454% ( 160) 00:10:56.630 8996.305 - 9055.884: 16.6118% ( 154) 00:10:56.630 9055.884 - 9115.462: 17.8865% ( 155) 00:10:56.630 9115.462 - 9175.040: 19.2516% ( 166) 00:10:56.630 9175.040 - 9234.618: 20.6003% ( 164) 00:10:56.630 9234.618 - 9294.196: 22.0641% ( 178) 00:10:56.630 9294.196 - 9353.775: 23.4704% ( 171) 00:10:56.630 9353.775 - 9413.353: 25.1316% ( 202) 00:10:56.630 9413.353 - 9472.931: 26.8997% ( 215) 00:10:56.630 9472.931 - 9532.509: 28.7993% ( 231) 00:10:56.630 9532.509 - 9592.087: 31.0197% ( 270) 00:10:56.630 9592.087 - 9651.665: 33.0592% ( 248) 00:10:56.630 9651.665 - 9711.244: 35.2303% ( 264) 00:10:56.630 9711.244 - 9770.822: 37.4424% ( 269) 00:10:56.630 9770.822 - 9830.400: 39.5312% ( 254) 00:10:56.630 9830.400 - 9889.978: 41.7434% ( 269) 00:10:56.630 9889.978 - 9949.556: 43.9967% ( 274) 00:10:56.630 9949.556 - 10009.135: 46.3734% ( 289) 00:10:56.630 10009.135 - 10068.713: 48.7664% ( 291) 00:10:56.630 10068.713 - 10128.291: 51.0444% ( 277) 00:10:56.630 10128.291 - 10187.869: 53.3388% ( 279) 00:10:56.630 10187.869 - 10247.447: 55.7237% ( 290) 00:10:56.630 10247.447 - 10307.025: 58.1743% ( 298) 00:10:56.630 10307.025 - 10366.604: 60.5181% ( 285) 00:10:56.630 10366.604 - 10426.182: 62.8372% ( 282) 00:10:56.630 10426.182 - 10485.760: 65.1316% ( 279) 00:10:56.630 10485.760 - 10545.338: 67.3766% ( 273) 00:10:56.630 10545.338 - 10604.916: 69.3339% ( 238) 00:10:56.630 10604.916 - 10664.495: 71.3569% ( 246) 00:10:56.630 10664.495 - 10724.073: 73.1908% ( 223) 00:10:56.630 10724.073 - 10783.651: 74.8438% ( 201) 00:10:56.630 10783.651 - 10843.229: 76.2829% ( 175) 00:10:56.630 10843.229 - 10902.807: 77.3355% ( 128) 00:10:56.630 10902.807 - 10962.385: 78.2401% ( 110) 00:10:56.630 10962.385 - 11021.964: 78.9391% ( 85) 00:10:56.630 11021.964 - 11081.542: 79.6628% ( 88) 00:10:56.630 11081.542 - 11141.120: 80.3865% ( 88) 00:10:56.630 11141.120 - 11200.698: 81.1266% ( 90) 00:10:56.630 11200.698 - 11260.276: 81.8668% ( 90) 00:10:56.630 11260.276 - 11319.855: 82.5740% ( 86) 00:10:56.630 11319.855 - 11379.433: 83.2812% ( 86) 00:10:56.630 11379.433 - 11439.011: 84.0214% ( 90) 00:10:56.630 11439.011 - 11498.589: 84.6546% ( 77) 00:10:56.630 11498.589 - 11558.167: 85.2878% ( 77) 00:10:56.630 11558.167 - 11617.745: 85.9539% ( 81) 00:10:56.630 11617.745 - 11677.324: 86.5132% ( 68) 00:10:56.630 11677.324 - 11736.902: 87.1546% ( 78) 00:10:56.630 11736.902 - 11796.480: 87.7714% ( 75) 00:10:56.630 11796.480 - 11856.058: 88.3306% ( 68) 00:10:56.630 11856.058 - 11915.636: 88.9391% ( 74) 00:10:56.630 11915.636 - 11975.215: 89.5066% ( 69) 
00:10:56.630 11975.215 - 12034.793: 90.0493% ( 66) 00:10:56.630 12034.793 - 12094.371: 90.6250% ( 70) 00:10:56.630 12094.371 - 12153.949: 91.2664% ( 78) 00:10:56.630 12153.949 - 12213.527: 91.8586% ( 72) 00:10:56.630 12213.527 - 12273.105: 92.4260% ( 69) 00:10:56.630 12273.105 - 12332.684: 93.0592% ( 77) 00:10:56.630 12332.684 - 12392.262: 93.6678% ( 74) 00:10:56.630 12392.262 - 12451.840: 94.2516% ( 71) 00:10:56.630 12451.840 - 12511.418: 94.8520% ( 73) 00:10:56.630 12511.418 - 12570.996: 95.3618% ( 62) 00:10:56.630 12570.996 - 12630.575: 95.8799% ( 63) 00:10:56.630 12630.575 - 12690.153: 96.2911% ( 50) 00:10:56.630 12690.153 - 12749.731: 96.5461% ( 31) 00:10:56.630 12749.731 - 12809.309: 96.8586% ( 38) 00:10:56.630 12809.309 - 12868.887: 97.0559% ( 24) 00:10:56.630 12868.887 - 12928.465: 97.3026% ( 30) 00:10:56.630 12928.465 - 12988.044: 97.4918% ( 23) 00:10:56.630 12988.044 - 13047.622: 97.6645% ( 21) 00:10:56.630 13047.622 - 13107.200: 97.8207% ( 19) 00:10:56.630 13107.200 - 13166.778: 97.9276% ( 13) 00:10:56.630 13166.778 - 13226.356: 98.0016% ( 9) 00:10:56.630 13226.356 - 13285.935: 98.1086% ( 13) 00:10:56.630 13285.935 - 13345.513: 98.1990% ( 11) 00:10:56.630 13345.513 - 13405.091: 98.2566% ( 7) 00:10:56.630 13405.091 - 13464.669: 98.3388% ( 10) 00:10:56.630 13464.669 - 13524.247: 98.4539% ( 14) 00:10:56.630 13524.247 - 13583.825: 98.4951% ( 5) 00:10:56.630 13583.825 - 13643.404: 98.5609% ( 8) 00:10:56.630 13643.404 - 13702.982: 98.6102% ( 6) 00:10:56.630 13702.982 - 13762.560: 98.6842% ( 9) 00:10:56.630 13762.560 - 13822.138: 98.7089% ( 3) 00:10:56.630 13822.138 - 13881.716: 98.7418% ( 4) 00:10:56.630 13881.716 - 13941.295: 98.7664% ( 3) 00:10:56.630 13941.295 - 14000.873: 98.8240% ( 7) 00:10:56.630 14000.873 - 14060.451: 98.8405% ( 2) 00:10:56.630 14060.451 - 14120.029: 98.8734% ( 4) 00:10:56.630 14120.029 - 14179.607: 98.8816% ( 1) 00:10:56.630 14179.607 - 14239.185: 98.9062% ( 3) 00:10:56.630 14239.185 - 14298.764: 98.9227% ( 2) 00:10:56.630 14298.764 - 14358.342: 98.9474% ( 3) 00:10:56.630 40036.538 - 40274.851: 98.9885% ( 5) 00:10:56.630 40274.851 - 40513.164: 99.0296% ( 5) 00:10:56.630 40513.164 - 40751.476: 99.0707% ( 5) 00:10:56.630 40751.476 - 40989.789: 99.1118% ( 5) 00:10:56.630 40989.789 - 41228.102: 99.1530% ( 5) 00:10:56.630 41228.102 - 41466.415: 99.1941% ( 5) 00:10:56.630 41466.415 - 41704.727: 99.2434% ( 6) 00:10:56.630 41704.727 - 41943.040: 99.2928% ( 6) 00:10:56.630 41943.040 - 42181.353: 99.3421% ( 6) 00:10:56.630 42181.353 - 42419.665: 99.3832% ( 5) 00:10:56.630 42419.665 - 42657.978: 99.4243% ( 5) 00:10:56.630 42657.978 - 42896.291: 99.4737% ( 6) 00:10:56.630 47424.233 - 47662.545: 99.4819% ( 1) 00:10:56.630 47662.545 - 47900.858: 99.5148% ( 4) 00:10:56.630 47900.858 - 48139.171: 99.5559% ( 5) 00:10:56.630 48139.171 - 48377.484: 99.6053% ( 6) 00:10:56.630 48377.484 - 48615.796: 99.6464% ( 5) 00:10:56.630 48615.796 - 48854.109: 99.6957% ( 6) 00:10:56.630 48854.109 - 49092.422: 99.7368% ( 5) 00:10:56.630 49092.422 - 49330.735: 99.7780% ( 5) 00:10:56.630 49330.735 - 49569.047: 99.8191% ( 5) 00:10:56.630 49569.047 - 49807.360: 99.8684% ( 6) 00:10:56.630 49807.360 - 50045.673: 99.9178% ( 6) 00:10:56.630 50045.673 - 50283.985: 99.9671% ( 6) 00:10:56.630 50283.985 - 50522.298: 100.0000% ( 4) 00:10:56.630 00:10:56.630 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:10:56.630 ============================================================================== 00:10:56.630 Range in us Cumulative IO count 00:10:56.630 7804.742 - 7864.320: 0.0247% ( 3) 
00:10:56.630 7864.320 - 7923.898: 0.0411% ( 2) 00:10:56.630 7923.898 - 7983.476: 0.0740% ( 4) 00:10:56.630 7983.476 - 8043.055: 0.1316% ( 7) 00:10:56.631 8043.055 - 8102.633: 0.2056% ( 9) 00:10:56.631 8102.633 - 8162.211: 0.3947% ( 23) 00:10:56.631 8162.211 - 8221.789: 0.7895% ( 48) 00:10:56.631 8221.789 - 8281.367: 1.3569% ( 69) 00:10:56.631 8281.367 - 8340.945: 2.2533% ( 109) 00:10:56.631 8340.945 - 8400.524: 3.1497% ( 109) 00:10:56.631 8400.524 - 8460.102: 4.2845% ( 138) 00:10:56.631 8460.102 - 8519.680: 5.4934% ( 147) 00:10:56.631 8519.680 - 8579.258: 6.8010% ( 159) 00:10:56.631 8579.258 - 8638.836: 8.1086% ( 159) 00:10:56.631 8638.836 - 8698.415: 9.4901% ( 168) 00:10:56.631 8698.415 - 8757.993: 10.8224% ( 162) 00:10:56.631 8757.993 - 8817.571: 12.1957% ( 167) 00:10:56.631 8817.571 - 8877.149: 13.5197% ( 161) 00:10:56.631 8877.149 - 8936.727: 14.9013% ( 168) 00:10:56.631 8936.727 - 8996.305: 16.2664% ( 166) 00:10:56.631 8996.305 - 9055.884: 17.6398% ( 167) 00:10:56.631 9055.884 - 9115.462: 18.9885% ( 164) 00:10:56.631 9115.462 - 9175.040: 20.1974% ( 147) 00:10:56.631 9175.040 - 9234.618: 21.2418% ( 127) 00:10:56.631 9234.618 - 9294.196: 22.1628% ( 112) 00:10:56.631 9294.196 - 9353.775: 23.1332% ( 118) 00:10:56.631 9353.775 - 9413.353: 24.1776% ( 127) 00:10:56.631 9413.353 - 9472.931: 25.3865% ( 147) 00:10:56.631 9472.931 - 9532.509: 26.7516% ( 166) 00:10:56.631 9532.509 - 9592.087: 28.3799% ( 198) 00:10:56.631 9592.087 - 9651.665: 30.2056% ( 222) 00:10:56.631 9651.665 - 9711.244: 32.3684% ( 263) 00:10:56.631 9711.244 - 9770.822: 34.6135% ( 273) 00:10:56.631 9770.822 - 9830.400: 37.1053% ( 303) 00:10:56.631 9830.400 - 9889.978: 39.7286% ( 319) 00:10:56.631 9889.978 - 9949.556: 42.3438% ( 318) 00:10:56.631 9949.556 - 10009.135: 45.0411% ( 328) 00:10:56.631 10009.135 - 10068.713: 47.6562% ( 318) 00:10:56.631 10068.713 - 10128.291: 50.3947% ( 333) 00:10:56.631 10128.291 - 10187.869: 53.1250% ( 332) 00:10:56.631 10187.869 - 10247.447: 55.9704% ( 346) 00:10:56.631 10247.447 - 10307.025: 58.8158% ( 346) 00:10:56.631 10307.025 - 10366.604: 61.5954% ( 338) 00:10:56.631 10366.604 - 10426.182: 64.4079% ( 342) 00:10:56.631 10426.182 - 10485.760: 67.0724% ( 324) 00:10:56.631 10485.760 - 10545.338: 69.6053% ( 308) 00:10:56.631 10545.338 - 10604.916: 71.9408% ( 284) 00:10:56.631 10604.916 - 10664.495: 73.9391% ( 243) 00:10:56.631 10664.495 - 10724.073: 75.6497% ( 208) 00:10:56.631 10724.073 - 10783.651: 76.9984% ( 164) 00:10:56.631 10783.651 - 10843.229: 77.9688% ( 118) 00:10:56.631 10843.229 - 10902.807: 78.7171% ( 91) 00:10:56.631 10902.807 - 10962.385: 79.3914% ( 82) 00:10:56.631 10962.385 - 11021.964: 79.9836% ( 72) 00:10:56.631 11021.964 - 11081.542: 80.4441% ( 56) 00:10:56.631 11081.542 - 11141.120: 80.8964% ( 55) 00:10:56.631 11141.120 - 11200.698: 81.3322% ( 53) 00:10:56.631 11200.698 - 11260.276: 81.7845% ( 55) 00:10:56.631 11260.276 - 11319.855: 82.2533% ( 57) 00:10:56.631 11319.855 - 11379.433: 82.7796% ( 64) 00:10:56.631 11379.433 - 11439.011: 83.3224% ( 66) 00:10:56.631 11439.011 - 11498.589: 83.9145% ( 72) 00:10:56.631 11498.589 - 11558.167: 84.5066% ( 72) 00:10:56.631 11558.167 - 11617.745: 85.0576% ( 67) 00:10:56.631 11617.745 - 11677.324: 85.7072% ( 79) 00:10:56.631 11677.324 - 11736.902: 86.3076% ( 73) 00:10:56.631 11736.902 - 11796.480: 86.9572% ( 79) 00:10:56.631 11796.480 - 11856.058: 87.6562% ( 85) 00:10:56.631 11856.058 - 11915.636: 88.3882% ( 89) 00:10:56.631 11915.636 - 11975.215: 89.1530% ( 93) 00:10:56.631 11975.215 - 12034.793: 89.9424% ( 96) 00:10:56.631 12034.793 - 
12094.371: 90.6826% ( 90) 00:10:56.631 12094.371 - 12153.949: 91.4391% ( 92) 00:10:56.631 12153.949 - 12213.527: 92.1957% ( 92) 00:10:56.631 12213.527 - 12273.105: 92.9359% ( 90) 00:10:56.631 12273.105 - 12332.684: 93.5938% ( 80) 00:10:56.631 12332.684 - 12392.262: 94.2434% ( 79) 00:10:56.631 12392.262 - 12451.840: 94.8191% ( 70) 00:10:56.631 12451.840 - 12511.418: 95.3207% ( 61) 00:10:56.631 12511.418 - 12570.996: 95.7648% ( 54) 00:10:56.631 12570.996 - 12630.575: 96.0938% ( 40) 00:10:56.631 12630.575 - 12690.153: 96.3651% ( 33) 00:10:56.631 12690.153 - 12749.731: 96.6694% ( 37) 00:10:56.631 12749.731 - 12809.309: 96.8750% ( 25) 00:10:56.631 12809.309 - 12868.887: 97.0724% ( 24) 00:10:56.631 12868.887 - 12928.465: 97.2615% ( 23) 00:10:56.631 12928.465 - 12988.044: 97.4342% ( 21) 00:10:56.631 12988.044 - 13047.622: 97.5822% ( 18) 00:10:56.631 13047.622 - 13107.200: 97.6727% ( 11) 00:10:56.631 13107.200 - 13166.778: 97.7796% ( 13) 00:10:56.631 13166.778 - 13226.356: 97.8701% ( 11) 00:10:56.631 13226.356 - 13285.935: 97.9688% ( 12) 00:10:56.631 13285.935 - 13345.513: 98.0592% ( 11) 00:10:56.631 13345.513 - 13405.091: 98.1168% ( 7) 00:10:56.631 13405.091 - 13464.669: 98.1990% ( 10) 00:10:56.631 13464.669 - 13524.247: 98.2566% ( 7) 00:10:56.631 13524.247 - 13583.825: 98.3470% ( 11) 00:10:56.631 13583.825 - 13643.404: 98.4128% ( 8) 00:10:56.631 13643.404 - 13702.982: 98.4786% ( 8) 00:10:56.631 13702.982 - 13762.560: 98.5526% ( 9) 00:10:56.631 13762.560 - 13822.138: 98.5938% ( 5) 00:10:56.631 13822.138 - 13881.716: 98.6431% ( 6) 00:10:56.631 13881.716 - 13941.295: 98.6760% ( 4) 00:10:56.631 13941.295 - 14000.873: 98.7171% ( 5) 00:10:56.631 14000.873 - 14060.451: 98.7664% ( 6) 00:10:56.631 14060.451 - 14120.029: 98.8076% ( 5) 00:10:56.631 14120.029 - 14179.607: 98.8487% ( 5) 00:10:56.631 14179.607 - 14239.185: 98.8898% ( 5) 00:10:56.631 14239.185 - 14298.764: 98.9309% ( 5) 00:10:56.631 14298.764 - 14358.342: 98.9474% ( 2) 00:10:56.631 37176.785 - 37415.098: 98.9720% ( 3) 00:10:56.631 37415.098 - 37653.411: 99.0132% ( 5) 00:10:56.631 37653.411 - 37891.724: 99.0625% ( 6) 00:10:56.631 37891.724 - 38130.036: 99.1036% ( 5) 00:10:56.631 38130.036 - 38368.349: 99.1530% ( 6) 00:10:56.631 38368.349 - 38606.662: 99.2023% ( 6) 00:10:56.631 38606.662 - 38844.975: 99.2516% ( 6) 00:10:56.631 38844.975 - 39083.287: 99.3010% ( 6) 00:10:56.631 39083.287 - 39321.600: 99.3421% ( 5) 00:10:56.631 39321.600 - 39559.913: 99.3914% ( 6) 00:10:56.631 39559.913 - 39798.225: 99.4326% ( 5) 00:10:56.631 39798.225 - 40036.538: 99.4737% ( 5) 00:10:56.631 44802.793 - 45041.105: 99.4901% ( 2) 00:10:56.631 45041.105 - 45279.418: 99.5395% ( 6) 00:10:56.631 45279.418 - 45517.731: 99.5888% ( 6) 00:10:56.631 45517.731 - 45756.044: 99.6299% ( 5) 00:10:56.631 45756.044 - 45994.356: 99.6793% ( 6) 00:10:56.631 45994.356 - 46232.669: 99.7286% ( 6) 00:10:56.631 46232.669 - 46470.982: 99.7780% ( 6) 00:10:56.631 46470.982 - 46709.295: 99.8273% ( 6) 00:10:56.631 46709.295 - 46947.607: 99.8684% ( 5) 00:10:56.631 46947.607 - 47185.920: 99.9178% ( 6) 00:10:56.631 47185.920 - 47424.233: 99.9671% ( 6) 00:10:56.631 47424.233 - 47662.545: 100.0000% ( 4) 00:10:56.631 00:10:56.631 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:10:56.631 ============================================================================== 00:10:56.631 Range in us Cumulative IO count 00:10:56.631 7923.898 - 7983.476: 0.0247% ( 3) 00:10:56.631 7983.476 - 8043.055: 0.0822% ( 7) 00:10:56.631 8043.055 - 8102.633: 0.1809% ( 12) 00:10:56.632 8102.633 - 8162.211: 
0.3701% ( 23) 00:10:56.632 8162.211 - 8221.789: 0.6414% ( 33) 00:10:56.632 8221.789 - 8281.367: 1.1595% ( 63) 00:10:56.632 8281.367 - 8340.945: 1.8174% ( 80) 00:10:56.632 8340.945 - 8400.524: 2.7467% ( 113) 00:10:56.632 8400.524 - 8460.102: 3.8487% ( 134) 00:10:56.632 8460.102 - 8519.680: 5.0329% ( 144) 00:10:56.632 8519.680 - 8579.258: 6.2829% ( 152) 00:10:56.632 8579.258 - 8638.836: 7.5576% ( 155) 00:10:56.632 8638.836 - 8698.415: 8.8487% ( 157) 00:10:56.632 8698.415 - 8757.993: 10.2220% ( 167) 00:10:56.632 8757.993 - 8817.571: 11.6118% ( 169) 00:10:56.632 8817.571 - 8877.149: 13.0510% ( 175) 00:10:56.632 8877.149 - 8936.727: 14.4901% ( 175) 00:10:56.632 8936.727 - 8996.305: 15.8388% ( 164) 00:10:56.632 8996.305 - 9055.884: 17.2533% ( 172) 00:10:56.632 9055.884 - 9115.462: 18.5938% ( 163) 00:10:56.632 9115.462 - 9175.040: 19.8438% ( 152) 00:10:56.632 9175.040 - 9234.618: 20.9951% ( 140) 00:10:56.632 9234.618 - 9294.196: 22.1053% ( 135) 00:10:56.632 9294.196 - 9353.775: 23.2319% ( 137) 00:10:56.632 9353.775 - 9413.353: 24.2845% ( 128) 00:10:56.632 9413.353 - 9472.931: 25.5592% ( 155) 00:10:56.632 9472.931 - 9532.509: 26.9408% ( 168) 00:10:56.632 9532.509 - 9592.087: 28.4951% ( 189) 00:10:56.632 9592.087 - 9651.665: 30.3865% ( 230) 00:10:56.632 9651.665 - 9711.244: 32.4753% ( 254) 00:10:56.632 9711.244 - 9770.822: 34.8766% ( 292) 00:10:56.632 9770.822 - 9830.400: 37.3602% ( 302) 00:10:56.632 9830.400 - 9889.978: 39.9671% ( 317) 00:10:56.632 9889.978 - 9949.556: 42.5000% ( 308) 00:10:56.632 9949.556 - 10009.135: 45.1974% ( 328) 00:10:56.632 10009.135 - 10068.713: 47.8783% ( 326) 00:10:56.632 10068.713 - 10128.291: 50.6168% ( 333) 00:10:56.632 10128.291 - 10187.869: 53.4704% ( 347) 00:10:56.632 10187.869 - 10247.447: 56.2582% ( 339) 00:10:56.632 10247.447 - 10307.025: 59.1201% ( 348) 00:10:56.632 10307.025 - 10366.604: 61.9243% ( 341) 00:10:56.632 10366.604 - 10426.182: 64.6464% ( 331) 00:10:56.632 10426.182 - 10485.760: 67.2862% ( 321) 00:10:56.632 10485.760 - 10545.338: 69.7039% ( 294) 00:10:56.632 10545.338 - 10604.916: 71.8257% ( 258) 00:10:56.632 10604.916 - 10664.495: 73.7418% ( 233) 00:10:56.632 10664.495 - 10724.073: 75.4523% ( 208) 00:10:56.632 10724.073 - 10783.651: 76.8010% ( 164) 00:10:56.632 10783.651 - 10843.229: 77.8372% ( 126) 00:10:56.632 10843.229 - 10902.807: 78.6678% ( 101) 00:10:56.632 10902.807 - 10962.385: 79.3010% ( 77) 00:10:56.632 10962.385 - 11021.964: 79.8849% ( 71) 00:10:56.632 11021.964 - 11081.542: 80.4194% ( 65) 00:10:56.632 11081.542 - 11141.120: 80.9786% ( 68) 00:10:56.632 11141.120 - 11200.698: 81.5132% ( 65) 00:10:56.632 11200.698 - 11260.276: 82.0312% ( 63) 00:10:56.632 11260.276 - 11319.855: 82.5329% ( 61) 00:10:56.632 11319.855 - 11379.433: 83.0592% ( 64) 00:10:56.632 11379.433 - 11439.011: 83.6513% ( 72) 00:10:56.632 11439.011 - 11498.589: 84.2188% ( 69) 00:10:56.632 11498.589 - 11558.167: 84.8026% ( 71) 00:10:56.632 11558.167 - 11617.745: 85.3947% ( 72) 00:10:56.632 11617.745 - 11677.324: 85.9622% ( 69) 00:10:56.632 11677.324 - 11736.902: 86.5872% ( 76) 00:10:56.632 11736.902 - 11796.480: 87.2368% ( 79) 00:10:56.632 11796.480 - 11856.058: 87.9359% ( 85) 00:10:56.632 11856.058 - 11915.636: 88.6349% ( 85) 00:10:56.632 11915.636 - 11975.215: 89.3010% ( 81) 00:10:56.632 11975.215 - 12034.793: 90.0329% ( 89) 00:10:56.632 12034.793 - 12094.371: 90.7155% ( 83) 00:10:56.632 12094.371 - 12153.949: 91.4638% ( 91) 00:10:56.632 12153.949 - 12213.527: 92.1464% ( 83) 00:10:56.632 12213.527 - 12273.105: 92.8536% ( 86) 00:10:56.632 12273.105 - 12332.684: 93.5197% ( 
81) 00:10:56.632 12332.684 - 12392.262: 94.1118% ( 72) 00:10:56.632 12392.262 - 12451.840: 94.7039% ( 72) 00:10:56.632 12451.840 - 12511.418: 95.1891% ( 59) 00:10:56.632 12511.418 - 12570.996: 95.6743% ( 59) 00:10:56.632 12570.996 - 12630.575: 96.1102% ( 53) 00:10:56.632 12630.575 - 12690.153: 96.4474% ( 41) 00:10:56.632 12690.153 - 12749.731: 96.7845% ( 41) 00:10:56.632 12749.731 - 12809.309: 97.0888% ( 37) 00:10:56.632 12809.309 - 12868.887: 97.3355% ( 30) 00:10:56.632 12868.887 - 12928.465: 97.5493% ( 26) 00:10:56.632 12928.465 - 12988.044: 97.7220% ( 21) 00:10:56.632 12988.044 - 13047.622: 97.8701% ( 18) 00:10:56.632 13047.622 - 13107.200: 97.9852% ( 14) 00:10:56.632 13107.200 - 13166.778: 98.0921% ( 13) 00:10:56.632 13166.778 - 13226.356: 98.2237% ( 16) 00:10:56.632 13226.356 - 13285.935: 98.2812% ( 7) 00:10:56.632 13285.935 - 13345.513: 98.3306% ( 6) 00:10:56.632 13345.513 - 13405.091: 98.3799% ( 6) 00:10:56.632 13405.091 - 13464.669: 98.4293% ( 6) 00:10:56.632 13464.669 - 13524.247: 98.4786% ( 6) 00:10:56.632 13524.247 - 13583.825: 98.5197% ( 5) 00:10:56.632 13583.825 - 13643.404: 98.5691% ( 6) 00:10:56.632 13643.404 - 13702.982: 98.6184% ( 6) 00:10:56.632 13702.982 - 13762.560: 98.6678% ( 6) 00:10:56.632 13762.560 - 13822.138: 98.7171% ( 6) 00:10:56.632 13822.138 - 13881.716: 98.7747% ( 7) 00:10:56.632 13881.716 - 13941.295: 98.8076% ( 4) 00:10:56.632 13941.295 - 14000.873: 98.8487% ( 5) 00:10:56.632 14000.873 - 14060.451: 98.8734% ( 3) 00:10:56.632 14060.451 - 14120.029: 98.8980% ( 3) 00:10:56.632 14120.029 - 14179.607: 98.9227% ( 3) 00:10:56.632 14179.607 - 14239.185: 98.9391% ( 2) 00:10:56.632 14239.185 - 14298.764: 98.9474% ( 1) 00:10:56.632 34793.658 - 35031.971: 98.9720% ( 3) 00:10:56.632 35031.971 - 35270.284: 99.0049% ( 4) 00:10:56.632 35270.284 - 35508.596: 99.0543% ( 6) 00:10:56.632 35508.596 - 35746.909: 99.1036% ( 6) 00:10:56.632 35746.909 - 35985.222: 99.1530% ( 6) 00:10:56.632 35985.222 - 36223.535: 99.2023% ( 6) 00:10:56.632 36223.535 - 36461.847: 99.2516% ( 6) 00:10:56.632 36461.847 - 36700.160: 99.2928% ( 5) 00:10:56.632 36700.160 - 36938.473: 99.3421% ( 6) 00:10:56.632 36938.473 - 37176.785: 99.3914% ( 6) 00:10:56.632 37176.785 - 37415.098: 99.4408% ( 6) 00:10:56.632 37415.098 - 37653.411: 99.4737% ( 4) 00:10:56.632 42419.665 - 42657.978: 99.5148% ( 5) 00:10:56.632 42657.978 - 42896.291: 99.5641% ( 6) 00:10:56.632 42896.291 - 43134.604: 99.6053% ( 5) 00:10:56.632 43134.604 - 43372.916: 99.6546% ( 6) 00:10:56.632 43372.916 - 43611.229: 99.7039% ( 6) 00:10:56.632 43611.229 - 43849.542: 99.7533% ( 6) 00:10:56.632 43849.542 - 44087.855: 99.8026% ( 6) 00:10:56.632 44087.855 - 44326.167: 99.8520% ( 6) 00:10:56.632 44326.167 - 44564.480: 99.9013% ( 6) 00:10:56.632 44564.480 - 44802.793: 99.9589% ( 7) 00:10:56.632 44802.793 - 45041.105: 100.0000% ( 5) 00:10:56.632 00:10:56.632 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:10:56.632 ============================================================================== 00:10:56.632 Range in us Cumulative IO count 00:10:56.632 7983.476 - 8043.055: 0.0493% ( 6) 00:10:56.632 8043.055 - 8102.633: 0.1727% ( 15) 00:10:56.632 8102.633 - 8162.211: 0.3618% ( 23) 00:10:56.632 8162.211 - 8221.789: 0.6743% ( 38) 00:10:56.632 8221.789 - 8281.367: 1.1595% ( 59) 00:10:56.632 8281.367 - 8340.945: 1.8750% ( 87) 00:10:56.632 8340.945 - 8400.524: 2.7632% ( 108) 00:10:56.632 8400.524 - 8460.102: 3.9062% ( 139) 00:10:56.632 8460.102 - 8519.680: 5.1069% ( 146) 00:10:56.632 8519.680 - 8579.258: 6.2829% ( 143) 00:10:56.632 8579.258 - 
8638.836: 7.4836% ( 146) 00:10:56.632 8638.836 - 8698.415: 8.8322% ( 164) 00:10:56.632 8698.415 - 8757.993: 10.2632% ( 174) 00:10:56.632 8757.993 - 8817.571: 11.6612% ( 170) 00:10:56.632 8817.571 - 8877.149: 13.0510% ( 169) 00:10:56.632 8877.149 - 8936.727: 14.4901% ( 175) 00:10:56.632 8936.727 - 8996.305: 15.8882% ( 170) 00:10:56.632 8996.305 - 9055.884: 17.2697% ( 168) 00:10:56.632 9055.884 - 9115.462: 18.5444% ( 155) 00:10:56.632 9115.462 - 9175.040: 19.7862% ( 151) 00:10:56.632 9175.040 - 9234.618: 20.9211% ( 138) 00:10:56.632 9234.618 - 9294.196: 21.8997% ( 119) 00:10:56.632 9294.196 - 9353.775: 22.9523% ( 128) 00:10:56.632 9353.775 - 9413.353: 24.1365% ( 144) 00:10:56.632 9413.353 - 9472.931: 25.4030% ( 154) 00:10:56.632 9472.931 - 9532.509: 26.8257% ( 173) 00:10:56.632 9532.509 - 9592.087: 28.4622% ( 199) 00:10:56.632 9592.087 - 9651.665: 30.4112% ( 237) 00:10:56.633 9651.665 - 9711.244: 32.7220% ( 281) 00:10:56.633 9711.244 - 9770.822: 35.0822% ( 287) 00:10:56.633 9770.822 - 9830.400: 37.5082% ( 295) 00:10:56.633 9830.400 - 9889.978: 39.9836% ( 301) 00:10:56.633 9889.978 - 9949.556: 42.5164% ( 308) 00:10:56.633 9949.556 - 10009.135: 45.1151% ( 316) 00:10:56.633 10009.135 - 10068.713: 47.6809% ( 312) 00:10:56.633 10068.713 - 10128.291: 50.4523% ( 337) 00:10:56.633 10128.291 - 10187.869: 53.1661% ( 330) 00:10:56.633 10187.869 - 10247.447: 55.9375% ( 337) 00:10:56.633 10247.447 - 10307.025: 58.7747% ( 345) 00:10:56.633 10307.025 - 10366.604: 61.4803% ( 329) 00:10:56.633 10366.604 - 10426.182: 64.1776% ( 328) 00:10:56.633 10426.182 - 10485.760: 66.7434% ( 312) 00:10:56.633 10485.760 - 10545.338: 69.1447% ( 292) 00:10:56.633 10545.338 - 10604.916: 71.3240% ( 265) 00:10:56.633 10604.916 - 10664.495: 73.2977% ( 240) 00:10:56.633 10664.495 - 10724.073: 75.0329% ( 211) 00:10:56.633 10724.073 - 10783.651: 76.4556% ( 173) 00:10:56.633 10783.651 - 10843.229: 77.4507% ( 121) 00:10:56.633 10843.229 - 10902.807: 78.2730% ( 100) 00:10:56.633 10902.807 - 10962.385: 79.0049% ( 89) 00:10:56.633 10962.385 - 11021.964: 79.6135% ( 74) 00:10:56.633 11021.964 - 11081.542: 80.1316% ( 63) 00:10:56.633 11081.542 - 11141.120: 80.5921% ( 56) 00:10:56.633 11141.120 - 11200.698: 81.0526% ( 56) 00:10:56.633 11200.698 - 11260.276: 81.5543% ( 61) 00:10:56.633 11260.276 - 11319.855: 82.0641% ( 62) 00:10:56.633 11319.855 - 11379.433: 82.6645% ( 73) 00:10:56.633 11379.433 - 11439.011: 83.2812% ( 75) 00:10:56.633 11439.011 - 11498.589: 83.8980% ( 75) 00:10:56.633 11498.589 - 11558.167: 84.5312% ( 77) 00:10:56.633 11558.167 - 11617.745: 85.2714% ( 90) 00:10:56.633 11617.745 - 11677.324: 86.0115% ( 90) 00:10:56.633 11677.324 - 11736.902: 86.7023% ( 84) 00:10:56.633 11736.902 - 11796.480: 87.4013% ( 85) 00:10:56.633 11796.480 - 11856.058: 88.1661% ( 93) 00:10:56.633 11856.058 - 11915.636: 88.8076% ( 78) 00:10:56.633 11915.636 - 11975.215: 89.5312% ( 88) 00:10:56.633 11975.215 - 12034.793: 90.2138% ( 83) 00:10:56.633 12034.793 - 12094.371: 90.8799% ( 81) 00:10:56.633 12094.371 - 12153.949: 91.5543% ( 82) 00:10:56.633 12153.949 - 12213.527: 92.2533% ( 85) 00:10:56.633 12213.527 - 12273.105: 92.9276% ( 82) 00:10:56.633 12273.105 - 12332.684: 93.6266% ( 85) 00:10:56.633 12332.684 - 12392.262: 94.3174% ( 84) 00:10:56.633 12392.262 - 12451.840: 94.9095% ( 72) 00:10:56.633 12451.840 - 12511.418: 95.4441% ( 65) 00:10:56.633 12511.418 - 12570.996: 95.8882% ( 54) 00:10:56.633 12570.996 - 12630.575: 96.2911% ( 49) 00:10:56.633 12630.575 - 12690.153: 96.6530% ( 44) 00:10:56.633 12690.153 - 12749.731: 96.9572% ( 37) 00:10:56.633 
12749.731 - 12809.309: 97.2286% ( 33) 00:10:56.633 12809.309 - 12868.887: 97.4013% ( 21) 00:10:56.633 12868.887 - 12928.465: 97.5822% ( 22) 00:10:56.633 12928.465 - 12988.044: 97.7385% ( 19) 00:10:56.633 12988.044 - 13047.622: 97.8783% ( 17) 00:10:56.633 13047.622 - 13107.200: 98.0099% ( 16) 00:10:56.633 13107.200 - 13166.778: 98.1414% ( 16) 00:10:56.633 13166.778 - 13226.356: 98.2319% ( 11) 00:10:56.633 13226.356 - 13285.935: 98.2895% ( 7) 00:10:56.633 13285.935 - 13345.513: 98.3553% ( 8) 00:10:56.633 13345.513 - 13405.091: 98.4128% ( 7) 00:10:56.633 13405.091 - 13464.669: 98.4704% ( 7) 00:10:56.633 13464.669 - 13524.247: 98.5280% ( 7) 00:10:56.633 13524.247 - 13583.825: 98.6102% ( 10) 00:10:56.633 13583.825 - 13643.404: 98.6431% ( 4) 00:10:56.633 13643.404 - 13702.982: 98.6924% ( 6) 00:10:56.633 13702.982 - 13762.560: 98.7336% ( 5) 00:10:56.633 13762.560 - 13822.138: 98.7747% ( 5) 00:10:56.633 13822.138 - 13881.716: 98.8240% ( 6) 00:10:56.633 13881.716 - 13941.295: 98.8651% ( 5) 00:10:56.633 13941.295 - 14000.873: 98.9062% ( 5) 00:10:56.633 14000.873 - 14060.451: 98.9391% ( 4) 00:10:56.633 14060.451 - 14120.029: 98.9474% ( 1) 00:10:56.633 31933.905 - 32172.218: 98.9803% ( 4) 00:10:56.633 32172.218 - 32410.531: 99.0296% ( 6) 00:10:56.633 32410.531 - 32648.844: 99.0789% ( 6) 00:10:56.633 32648.844 - 32887.156: 99.1201% ( 5) 00:10:56.633 32887.156 - 33125.469: 99.1694% ( 6) 00:10:56.633 33125.469 - 33363.782: 99.2105% ( 5) 00:10:56.633 33363.782 - 33602.095: 99.2599% ( 6) 00:10:56.633 33602.095 - 33840.407: 99.3092% ( 6) 00:10:56.633 33840.407 - 34078.720: 99.3586% ( 6) 00:10:56.633 34078.720 - 34317.033: 99.4079% ( 6) 00:10:56.633 34317.033 - 34555.345: 99.4572% ( 6) 00:10:56.633 34555.345 - 34793.658: 99.4737% ( 2) 00:10:56.633 39321.600 - 39559.913: 99.5066% ( 4) 00:10:56.633 39559.913 - 39798.225: 99.5477% ( 5) 00:10:56.633 39798.225 - 40036.538: 99.6053% ( 7) 00:10:56.633 40036.538 - 40274.851: 99.6546% ( 6) 00:10:56.633 40274.851 - 40513.164: 99.7039% ( 6) 00:10:56.633 40513.164 - 40751.476: 99.7615% ( 7) 00:10:56.633 40751.476 - 40989.789: 99.8026% ( 5) 00:10:56.633 40989.789 - 41228.102: 99.8602% ( 7) 00:10:56.633 41228.102 - 41466.415: 99.9095% ( 6) 00:10:56.633 41466.415 - 41704.727: 99.9589% ( 6) 00:10:56.633 41704.727 - 41943.040: 100.0000% ( 5) 00:10:56.633 00:10:56.633 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:10:56.633 ============================================================================== 00:10:56.633 Range in us Cumulative IO count 00:10:56.633 8043.055 - 8102.633: 0.1309% ( 16) 00:10:56.633 8102.633 - 8162.211: 0.2781% ( 18) 00:10:56.633 8162.211 - 8221.789: 0.6381% ( 44) 00:10:56.633 8221.789 - 8281.367: 1.1453% ( 62) 00:10:56.633 8281.367 - 8340.945: 1.8815% ( 90) 00:10:56.633 8340.945 - 8400.524: 2.7896% ( 111) 00:10:56.633 8400.524 - 8460.102: 3.8940% ( 135) 00:10:56.633 8460.102 - 8519.680: 5.0884% ( 146) 00:10:56.633 8519.680 - 8579.258: 6.2418% ( 141) 00:10:56.633 8579.258 - 8638.836: 7.5753% ( 163) 00:10:56.633 8638.836 - 8698.415: 8.8678% ( 158) 00:10:56.633 8698.415 - 8757.993: 10.2258% ( 166) 00:10:56.633 8757.993 - 8817.571: 11.6083% ( 169) 00:10:56.633 8817.571 - 8877.149: 12.9908% ( 169) 00:10:56.633 8877.149 - 8936.727: 14.4061% ( 173) 00:10:56.633 8936.727 - 8996.305: 15.8377% ( 175) 00:10:56.633 8996.305 - 9055.884: 17.2448% ( 172) 00:10:56.633 9055.884 - 9115.462: 18.5455% ( 159) 00:10:56.633 9115.462 - 9175.040: 19.7235% ( 144) 00:10:56.633 9175.040 - 9234.618: 20.8688% ( 140) 00:10:56.633 9234.618 - 9294.196: 21.8914% ( 
125) 00:10:56.633 9294.196 - 9353.775: 22.8812% ( 121) 00:10:56.633 9353.775 - 9413.353: 23.9529% ( 131) 00:10:56.633 9413.353 - 9472.931: 25.1718% ( 149) 00:10:56.633 9472.931 - 9532.509: 26.6116% ( 176) 00:10:56.633 9532.509 - 9592.087: 28.2068% ( 195) 00:10:56.633 9592.087 - 9651.665: 30.1374% ( 236) 00:10:56.633 9651.665 - 9711.244: 32.3217% ( 267) 00:10:56.633 9711.244 - 9770.822: 34.7104% ( 292) 00:10:56.633 9770.822 - 9830.400: 37.1155% ( 294) 00:10:56.633 9830.400 - 9889.978: 39.4388% ( 284) 00:10:56.633 9889.978 - 9949.556: 41.9503% ( 307) 00:10:56.633 9949.556 - 10009.135: 44.5190% ( 314) 00:10:56.633 10009.135 - 10068.713: 47.3413% ( 345) 00:10:56.633 10068.713 - 10128.291: 50.0000% ( 325) 00:10:56.633 10128.291 - 10187.869: 52.6587% ( 325) 00:10:56.633 10187.869 - 10247.447: 55.4647% ( 343) 00:10:56.633 10247.447 - 10307.025: 58.2543% ( 341) 00:10:56.633 10307.025 - 10366.604: 60.9702% ( 332) 00:10:56.633 10366.604 - 10426.182: 63.6371% ( 326) 00:10:56.633 10426.182 - 10485.760: 66.1895% ( 312) 00:10:56.633 10485.760 - 10545.338: 68.6109% ( 296) 00:10:56.633 10545.338 - 10604.916: 70.8851% ( 278) 00:10:56.633 10604.916 - 10664.495: 72.8730% ( 243) 00:10:56.633 10664.495 - 10724.073: 74.6319% ( 215) 00:10:56.633 10724.073 - 10783.651: 76.0553% ( 174) 00:10:56.633 10783.651 - 10843.229: 77.1760% ( 137) 00:10:56.633 10843.229 - 10902.807: 78.0350% ( 105) 00:10:56.633 10902.807 - 10962.385: 78.7467% ( 87) 00:10:56.633 10962.385 - 11021.964: 79.3685% ( 76) 00:10:56.633 11021.964 - 11081.542: 79.9738% ( 74) 00:10:56.633 11081.542 - 11141.120: 80.4728% ( 61) 00:10:56.633 11141.120 - 11200.698: 80.9719% ( 61) 00:10:56.633 11200.698 - 11260.276: 81.4709% ( 61) 00:10:56.633 11260.276 - 11319.855: 82.0026% ( 65) 00:10:56.633 11319.855 - 11379.433: 82.5998% ( 73) 00:10:56.633 11379.433 - 11439.011: 83.2052% ( 74) 00:10:56.633 11439.011 - 11498.589: 83.8514% ( 79) 00:10:56.633 11498.589 - 11558.167: 84.5223% ( 82) 00:10:56.633 11558.167 - 11617.745: 85.3158% ( 97) 00:10:56.633 11617.745 - 11677.324: 86.0848% ( 94) 00:10:56.633 11677.324 - 11736.902: 86.7719% ( 84) 00:10:56.633 11736.902 - 11796.480: 87.4918% ( 88) 00:10:56.633 11796.480 - 11856.058: 88.2363% ( 91) 00:10:56.633 11856.058 - 11915.636: 88.9480% ( 87) 00:10:56.633 11915.636 - 11975.215: 89.6515% ( 86) 00:10:56.633 11975.215 - 12034.793: 90.3550% ( 86) 00:10:56.633 12034.793 - 12094.371: 91.0259% ( 82) 00:10:56.633 12094.371 - 12153.949: 91.7539% ( 89) 00:10:56.633 12153.949 - 12213.527: 92.4329% ( 83) 00:10:56.633 12213.527 - 12273.105: 93.1201% ( 84) 00:10:56.633 12273.105 - 12332.684: 93.7827% ( 81) 00:10:56.634 12332.684 - 12392.262: 94.4372% ( 80) 00:10:56.634 12392.262 - 12451.840: 95.0753% ( 78) 00:10:56.634 12451.840 - 12511.418: 95.5252% ( 55) 00:10:56.634 12511.418 - 12570.996: 95.9424% ( 51) 00:10:56.634 12570.996 - 12630.575: 96.2860% ( 42) 00:10:56.634 12630.575 - 12690.153: 96.6132% ( 40) 00:10:56.634 12690.153 - 12749.731: 96.8750% ( 32) 00:10:56.634 12749.731 - 12809.309: 97.0795% ( 25) 00:10:56.634 12809.309 - 12868.887: 97.2104% ( 16) 00:10:56.634 12868.887 - 12928.465: 97.3577% ( 18) 00:10:56.634 12928.465 - 12988.044: 97.5049% ( 18) 00:10:56.634 12988.044 - 13047.622: 97.6276% ( 15) 00:10:56.634 13047.622 - 13107.200: 97.7503% ( 15) 00:10:56.634 13107.200 - 13166.778: 97.9058% ( 19) 00:10:56.634 13166.778 - 13226.356: 98.0448% ( 17) 00:10:56.634 13226.356 - 13285.935: 98.1594% ( 14) 00:10:56.634 13285.935 - 13345.513: 98.2657% ( 13) 00:10:56.634 13345.513 - 13405.091: 98.3312% ( 8) 00:10:56.634 13405.091 - 
13464.669: 98.4048% ( 9)
00:10:56.634 [remaining latency histogram buckets for PCIE (0000:00:12.0) NSID 2 elided; cumulative IO count reaches 100.0000% ( 6) at 33602.095us]
00:10:56.634
00:10:56.634 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:10:56.634 ==============================================================================
00:10:56.634        Range in us     Cumulative    IO count
00:10:56.635 [latency histogram buckets from 7983.476us elided; cumulative IO count reaches 100.0000% ( 3) at 30742.342us]
00:10:56.635
00:10:56.635  11:35:55 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
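The trace line above is the exact spdk_nvme_perf invocation for the run that follows: queue depth 128 (-q), a 100% write workload (-w write), 12288-byte (12 KiB) I/Os (-o), a one-second run (-t), latency tracking (-L, given twice here, which in this run also produces the per-device latency histograms below), and shared-memory group ID 0 (-i). A minimal sketch of replaying the same workload outside the harness, assuming the repo path echoed in the log; this helper is illustrative and not part of the test scripts:

    import shlex
    import subprocess

    # Same invocation as the nvme_perf trace above; the binary path is the
    # one echoed in the log and will differ on other machines (assumption).
    PERF = "/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf"
    cmd = f"{PERF} -q 128 -w write -o 12288 -t 1 -LL -i 0"

    # spdk_nvme_perf needs hugepages and kernel-unbound NVMe devices, so
    # this only runs in an environment prepared the way the CI VM is.
    result = subprocess.run(shlex.split(cmd), capture_output=True, text=True)
    print(result.stdout)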
00:10:58.011 Initializing NVMe Controllers
00:10:58.011 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:10:58.011 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:10:58.011 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:10:58.011 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:10:58.011 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:10:58.011 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:10:58.011 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:10:58.011 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:10:58.011 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:10:58.011 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:10:58.011 Initialization complete. Launching workers.
00:10:58.011 ========================================================
00:10:58.011                                                                                                 Latency(us)
00:10:58.011 Device Information                     :       IOPS      MiB/s    Average        min        max
00:10:58.011 PCIE (0000:00:10.0) NSID 1 from core  0:   10035.07     117.60   12784.28    9585.87   44762.99
00:10:58.011 PCIE (0000:00:11.0) NSID 1 from core  0:   10035.07     117.60   12751.04    9931.59   41516.72
00:10:58.011 PCIE (0000:00:13.0) NSID 1 from core  0:   10035.07     117.60   12717.49    9828.01   39261.35
00:10:58.011 PCIE (0000:00:12.0) NSID 1 from core  0:   10035.07     117.60   12683.73    9568.99   36204.03
00:10:58.011 PCIE (0000:00:12.0) NSID 2 from core  0:   10035.07     117.60   12649.54    9821.09   33110.34
00:10:58.011 PCIE (0000:00:12.0) NSID 3 from core  0:   10035.07     117.60   12615.17    9797.35   30135.75
00:10:58.011 ========================================================
00:10:58.011 Total                                  :   60210.45     705.59   12700.21    9568.99   44762.99
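A quick cross-check of the table above: with 12288-byte I/Os, throughput in MiB/s is IOPS * 12288 / 2**20, which reproduces both the per-namespace rows and the Total row to rounding. A minimal sketch (plain Python, not part of the test):

    # Throughput sanity check for the summary table above.
    io_size = 12288  # bytes, from the -o flag on the perf command line

    for iops in (10035.07, 60210.45):  # a per-namespace row and the Total row
        print(f"{iops:>9.2f} IOPS -> {iops * io_size / 2**20:6.2f} MiB/s")

    # Prints:
    #  10035.07 IOPS -> 117.60 MiB/s
    #  60210.45 IOPS -> 705.59 MiB/s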
00:10:58.011
00:10:58.011 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:10:58.011 =================================================================================
00:10:58.011    1.00000% : 10009.135us
00:10:58.011   10.00000% : 10664.495us
00:10:58.011   25.00000% : 11021.964us
00:10:58.011   50.00000% : 11677.324us
00:10:58.011   75.00000% : 12570.996us
00:10:58.011   90.00000% : 15013.702us
00:10:58.011   95.00000% : 20971.520us
00:10:58.011   98.00000% : 26691.025us
00:10:58.011   99.00000% : 34317.033us
00:10:58.011   99.50000% : 42657.978us
00:10:58.011   99.90000% : 44326.167us
00:10:58.011   99.99000% : 44802.793us
00:10:58.011   99.99900% : 44802.793us
00:10:58.011   99.99990% : 44802.793us
00:10:58.011   99.99999% : 44802.793us
00:10:58.011
00:10:58.011 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:10:58.011 =================================================================================
00:10:58.011    1.00000% : 10307.025us
00:10:58.011   10.00000% : 10783.651us
00:10:58.011   25.00000% : 11141.120us
00:10:58.011   50.00000% : 11617.745us
00:10:58.011   75.00000% : 12451.840us
00:10:58.011   90.00000% : 14954.124us
00:10:58.011   95.00000% : 20614.051us
00:10:58.011   98.00000% : 27048.495us
00:10:58.011   99.00000% : 32648.844us
00:10:58.011   99.50000% : 39559.913us
00:10:58.011   99.90000% : 41228.102us
00:10:58.011   99.99000% : 41704.727us
00:10:58.011   99.99900% : 41704.727us
00:10:58.011   99.99990% : 41704.727us
00:10:58.011   99.99999% : 41704.727us
00:10:58.011
00:10:58.011 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:10:58.011 =================================================================================
00:10:58.011    1.00000% : 10187.869us
00:10:58.011   10.00000% : 10783.651us
00:10:58.011   25.00000% : 11141.120us
00:10:58.011   50.00000% : 11558.167us
00:10:58.011   75.00000% : 12511.418us
00:10:58.011   90.00000% : 14894.545us
00:10:58.011   95.00000% : 20971.520us
00:10:58.011   98.00000% : 26691.025us
00:10:58.011   99.00000% : 30146.560us
00:10:58.011   99.50000% : 37176.785us
00:10:58.011   99.90000% : 39083.287us
00:10:58.011   99.99000% : 39321.600us
00:10:58.011   99.99900% : 39321.600us
00:10:58.011   99.99990% : 39321.600us
00:10:58.011   99.99999% : 39321.600us
00:10:58.011
00:10:58.011 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:10:58.011 =================================================================================
00:10:58.011    1.00000% : 10247.447us
00:10:58.011   10.00000% : 10783.651us
00:10:58.011   25.00000% : 11081.542us
00:10:58.011   50.00000% : 11558.167us
00:10:58.011   75.00000% : 12511.418us
00:10:58.011   90.00000% : 14954.124us
00:10:58.011   95.00000% : 21924.771us
00:10:58.011   98.00000% : 26691.025us
00:10:58.011   99.00000% : 28359.215us
00:10:58.011   99.50000% : 34317.033us
00:10:58.011   99.90000% : 35985.222us
00:10:58.011   99.99000% : 36223.535us
00:10:58.011   99.99900% : 36223.535us
00:10:58.011   99.99990% : 36223.535us
00:10:58.011   99.99999% : 36223.535us
00:10:58.011
00:10:58.011 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:10:58.011 =================================================================================
00:10:58.011    1.00000% : 10187.869us
00:10:58.011   10.00000% : 10783.651us
00:10:58.011   25.00000% : 11141.120us
00:10:58.011   50.00000% : 11617.745us
00:10:58.011   75.00000% : 12511.418us
00:10:58.011   90.00000% : 14954.124us
00:10:58.011   95.00000% : 22043.927us
00:10:58.011   98.00000% : 24903.680us
00:10:58.011   99.00000% : 27525.120us
00:10:58.011   99.50000% : 31218.967us
00:10:58.011   99.90000% : 32887.156us
00:10:58.011   99.99000% : 33125.469us
00:10:58.011   99.99900% : 33125.469us
00:10:58.011   99.99990% : 33125.469us
00:10:58.011   99.99999% : 33125.469us
00:10:58.011
00:10:58.011 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:10:58.011 =================================================================================
00:10:58.011    1.00000% : 10187.869us
00:10:58.011   10.00000% : 10724.073us
00:10:58.011   25.00000% : 11141.120us
00:10:58.011   50.00000% : 11617.745us
00:10:58.011   75.00000% : 12570.996us
00:10:58.011   90.00000% : 14954.124us
00:10:58.011   95.00000% : 21328.989us
00:10:58.011   98.00000% : 24784.524us
00:10:58.011   99.00000% : 27286.807us
00:10:58.011   99.50000% : 28716.684us
00:10:58.011   99.90000% : 29789.091us
00:10:58.011   99.99000% : 30146.560us
00:10:58.011   99.99900% : 30146.560us
00:10:58.011   99.99990% : 30146.560us
00:10:58.011   99.99999% : 30146.560us
00:10:58.011
00:10:58.011 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:10:58.011 ==============================================================================
00:10:58.011        Range in us     Cumulative    IO count
00:10:58.012 [latency histogram buckets from 9532.509us elided; cumulative IO count reaches 100.0000% ( 3) at 44802.793us]
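The percentile summaries above are read off each device's cumulative latency histogram (condensed here): the Nth percentile is the upper bucket edge at which the Cumulative column first reaches N percent. A minimal sketch of that lookup, with a hypothetical excerpt of (upper_edge_us, cumulative_pct) pairs standing in for the elided buckets:

    from bisect import bisect_left

    # Hypothetical excerpt of a cumulative histogram; the real data are the
    # "Range in us / Cumulative / IO count" lines condensed above, and the
    # cumulative percentages here are made up for illustration.
    buckets = [
        (10009.135, 1.02),
        (10664.495, 10.15),
        (11021.964, 25.31),
        (11677.324, 50.12),
    ]

    def percentile(buckets, pct):
        """Upper edge of the first bucket whose cumulative % reaches pct."""
        i = bisect_left([c for _, c in buckets], pct)
        return buckets[i][0] if i < len(buckets) else None

    print(percentile(buckets, 50.0))  # 11677.324, the 50.00000% line above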
00:10:58.013
00:10:58.013 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:10:58.013 ==============================================================================
00:10:58.013        Range in us     Cumulative    IO count
00:10:58.013 [latency histogram buckets from 9889.978us elided; cumulative IO count reaches 100.0000% ( 2) at 41704.727us]
00:10:58.014
00:10:58.014 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:10:58.014 ==============================================================================
00:10:58.014        Range in us     Cumulative    IO count
00:10:58.014 [latency histogram buckets from 9770.822us elided; cumulative IO count reaches 100.0000% ( 5) at 39321.600us]
00:10:58.015
00:10:58.015 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:10:58.015 ==============================================================================
00:10:58.015        Range in us     Cumulative    IO count
00:10:58.015 [latency histogram buckets from 9532.509us elided; cumulative IO count reaches 100.0000% ( 6) at 36223.535us]
00:10:58.016
00:10:58.016 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:10:58.016 ==============================================================================
00:10:58.016        Range in us     Cumulative    IO count
00:10:58.017 [latency histogram buckets from 9770.822us elided]
98.5370% ( 2) 00:10:58.017 26452.713 - 26571.869: 98.5868% ( 5) 00:10:58.017 26571.869 - 26691.025: 98.6465% ( 6) 00:10:58.017 26691.025 - 26810.182: 98.7062% ( 6) 00:10:58.017 26810.182 - 26929.338: 98.7560% ( 5) 00:10:58.017 26929.338 - 27048.495: 98.8157% ( 6) 00:10:58.017 27048.495 - 27167.651: 98.8654% ( 5) 00:10:58.017 27167.651 - 27286.807: 98.9451% ( 8) 00:10:58.017 27286.807 - 27405.964: 98.9948% ( 5) 00:10:58.017 27405.964 - 27525.120: 99.0147% ( 2) 00:10:58.017 27525.120 - 27644.276: 99.0247% ( 1) 00:10:58.017 27644.276 - 27763.433: 99.0446% ( 2) 00:10:58.017 27763.433 - 27882.589: 99.0545% ( 1) 00:10:58.017 27882.589 - 28001.745: 99.0744% ( 2) 00:10:58.017 28001.745 - 28120.902: 99.0943% ( 2) 00:10:58.017 28120.902 - 28240.058: 99.1043% ( 1) 00:10:58.017 28240.058 - 28359.215: 99.1242% ( 2) 00:10:58.017 28359.215 - 28478.371: 99.1342% ( 1) 00:10:58.017 28478.371 - 28597.527: 99.1541% ( 2) 00:10:58.017 28597.527 - 28716.684: 99.1640% ( 1) 00:10:58.017 28716.684 - 28835.840: 99.1839% ( 2) 00:10:58.017 28835.840 - 28954.996: 99.1939% ( 1) 00:10:58.017 28954.996 - 29074.153: 99.2138% ( 2) 00:10:58.017 29074.153 - 29193.309: 99.2237% ( 1) 00:10:58.018 29193.309 - 29312.465: 99.2436% ( 2) 00:10:58.018 29312.465 - 29431.622: 99.2635% ( 2) 00:10:58.018 29431.622 - 29550.778: 99.2735% ( 1) 00:10:58.018 29550.778 - 29669.935: 99.2934% ( 2) 00:10:58.018 29669.935 - 29789.091: 99.3033% ( 1) 00:10:58.018 29789.091 - 29908.247: 99.3232% ( 2) 00:10:58.018 29908.247 - 30027.404: 99.3432% ( 2) 00:10:58.018 30027.404 - 30146.560: 99.3631% ( 2) 00:10:58.018 30504.029 - 30742.342: 99.4128% ( 5) 00:10:58.018 30742.342 - 30980.655: 99.4825% ( 7) 00:10:58.018 30980.655 - 31218.967: 99.5322% ( 5) 00:10:58.018 31218.967 - 31457.280: 99.5820% ( 5) 00:10:58.018 31457.280 - 31695.593: 99.6417% ( 6) 00:10:58.018 31695.593 - 31933.905: 99.7014% ( 6) 00:10:58.018 31933.905 - 32172.218: 99.7611% ( 6) 00:10:58.018 32172.218 - 32410.531: 99.8209% ( 6) 00:10:58.018 32410.531 - 32648.844: 99.8806% ( 6) 00:10:58.018 32648.844 - 32887.156: 99.9403% ( 6) 00:10:58.018 32887.156 - 33125.469: 100.0000% ( 6) 00:10:58.018 00:10:58.018 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:10:58.018 ============================================================================== 00:10:58.018 Range in us Cumulative IO count 00:10:58.018 9770.822 - 9830.400: 0.0398% ( 4) 00:10:58.018 9830.400 - 9889.978: 0.1294% ( 9) 00:10:58.018 9889.978 - 9949.556: 0.2189% ( 9) 00:10:58.018 9949.556 - 10009.135: 0.3384% ( 12) 00:10:58.018 10009.135 - 10068.713: 0.5872% ( 25) 00:10:58.018 10068.713 - 10128.291: 0.7962% ( 21) 00:10:58.018 10128.291 - 10187.869: 1.0748% ( 28) 00:10:58.018 10187.869 - 10247.447: 1.4431% ( 37) 00:10:58.018 10247.447 - 10307.025: 1.8213% ( 38) 00:10:58.018 10307.025 - 10366.604: 2.5179% ( 70) 00:10:58.018 10366.604 - 10426.182: 3.5430% ( 103) 00:10:58.018 10426.182 - 10485.760: 4.5780% ( 104) 00:10:58.018 10485.760 - 10545.338: 5.7424% ( 117) 00:10:58.018 10545.338 - 10604.916: 7.2154% ( 148) 00:10:58.018 10604.916 - 10664.495: 9.2456% ( 204) 00:10:58.018 10664.495 - 10724.073: 10.9674% ( 173) 00:10:58.018 10724.073 - 10783.651: 12.5100% ( 155) 00:10:58.018 10783.651 - 10843.229: 14.2914% ( 179) 00:10:58.018 10843.229 - 10902.807: 16.3217% ( 204) 00:10:58.018 10902.807 - 10962.385: 18.4514% ( 214) 00:10:58.018 10962.385 - 11021.964: 21.5466% ( 311) 00:10:58.018 11021.964 - 11081.542: 24.3730% ( 284) 00:10:58.018 11081.542 - 11141.120: 27.1596% ( 280) 00:10:58.018 11141.120 - 11200.698: 30.4936% ( 335) 
00:10:58.018 11200.698 - 11260.276: 33.5987% ( 312) 00:10:58.018 11260.276 - 11319.855: 37.7189% ( 414) 00:10:58.018 11319.855 - 11379.433: 40.5553% ( 285) 00:10:58.018 11379.433 - 11439.011: 43.7301% ( 319) 00:10:58.018 11439.011 - 11498.589: 46.6959% ( 298) 00:10:58.018 11498.589 - 11558.167: 49.3730% ( 269) 00:10:58.018 11558.167 - 11617.745: 52.0203% ( 266) 00:10:58.018 11617.745 - 11677.324: 54.9264% ( 292) 00:10:58.018 11677.324 - 11736.902: 57.4542% ( 254) 00:10:58.018 11736.902 - 11796.480: 59.6537% ( 221) 00:10:58.018 11796.480 - 11856.058: 61.6640% ( 202) 00:10:58.018 11856.058 - 11915.636: 63.4355% ( 178) 00:10:58.018 11915.636 - 11975.215: 65.0478% ( 162) 00:10:58.018 11975.215 - 12034.793: 66.5506% ( 151) 00:10:58.018 12034.793 - 12094.371: 68.3021% ( 176) 00:10:58.018 12094.371 - 12153.949: 69.5064% ( 121) 00:10:58.018 12153.949 - 12213.527: 70.5314% ( 103) 00:10:58.018 12213.527 - 12273.105: 71.4869% ( 96) 00:10:58.018 12273.105 - 12332.684: 72.3229% ( 84) 00:10:58.018 12332.684 - 12392.262: 73.0991% ( 78) 00:10:58.018 12392.262 - 12451.840: 73.8057% ( 71) 00:10:58.018 12451.840 - 12511.418: 74.5024% ( 70) 00:10:58.018 12511.418 - 12570.996: 75.2189% ( 72) 00:10:58.018 12570.996 - 12630.575: 75.8957% ( 68) 00:10:58.018 12630.575 - 12690.153: 76.7814% ( 89) 00:10:58.018 12690.153 - 12749.731: 77.4283% ( 65) 00:10:58.018 12749.731 - 12809.309: 78.1748% ( 75) 00:10:58.018 12809.309 - 12868.887: 78.8316% ( 66) 00:10:58.018 12868.887 - 12928.465: 79.4088% ( 58) 00:10:58.018 12928.465 - 12988.044: 79.9960% ( 59) 00:10:58.018 12988.044 - 13047.622: 80.5434% ( 55) 00:10:58.018 13047.622 - 13107.200: 81.1903% ( 65) 00:10:58.018 13107.200 - 13166.778: 81.7874% ( 60) 00:10:58.018 13166.778 - 13226.356: 82.2850% ( 50) 00:10:58.018 13226.356 - 13285.935: 82.8324% ( 55) 00:10:58.018 13285.935 - 13345.513: 83.3897% ( 56) 00:10:58.018 13345.513 - 13405.091: 83.8674% ( 48) 00:10:58.018 13405.091 - 13464.669: 84.1561% ( 29) 00:10:58.018 13464.669 - 13524.247: 84.3750% ( 22) 00:10:58.018 13524.247 - 13583.825: 84.5641% ( 19) 00:10:58.018 13583.825 - 13643.404: 84.7532% ( 19) 00:10:58.018 13643.404 - 13702.982: 84.9622% ( 21) 00:10:58.018 13702.982 - 13762.560: 85.1214% ( 16) 00:10:58.018 13762.560 - 13822.138: 85.3006% ( 18) 00:10:58.018 13822.138 - 13881.716: 85.4498% ( 15) 00:10:58.018 13881.716 - 13941.295: 85.5991% ( 15) 00:10:58.018 13941.295 - 14000.873: 85.7683% ( 17) 00:10:58.018 14000.873 - 14060.451: 85.9275% ( 16) 00:10:58.018 14060.451 - 14120.029: 86.0470% ( 12) 00:10:58.018 14120.029 - 14179.607: 86.1863% ( 14) 00:10:58.018 14179.607 - 14239.185: 86.2958% ( 11) 00:10:58.018 14239.185 - 14298.764: 86.4351% ( 14) 00:10:58.018 14298.764 - 14358.342: 86.6541% ( 22) 00:10:58.018 14358.342 - 14417.920: 86.8730% ( 22) 00:10:58.018 14417.920 - 14477.498: 87.1417% ( 27) 00:10:58.018 14477.498 - 14537.076: 87.3806% ( 24) 00:10:58.018 14537.076 - 14596.655: 87.6692% ( 29) 00:10:58.018 14596.655 - 14656.233: 88.0673% ( 40) 00:10:58.018 14656.233 - 14715.811: 88.5350% ( 47) 00:10:58.018 14715.811 - 14775.389: 89.0824% ( 55) 00:10:58.018 14775.389 - 14834.967: 89.5104% ( 43) 00:10:58.018 14834.967 - 14894.545: 89.8388% ( 33) 00:10:58.018 14894.545 - 14954.124: 90.1771% ( 34) 00:10:58.018 14954.124 - 15013.702: 90.4359% ( 26) 00:10:58.018 15013.702 - 15073.280: 90.7743% ( 34) 00:10:58.018 15073.280 - 15132.858: 91.0131% ( 24) 00:10:58.018 15132.858 - 15192.436: 91.2321% ( 22) 00:10:58.018 15192.436 - 15252.015: 91.4610% ( 23) 00:10:58.018 15252.015 - 15371.171: 91.8591% ( 40) 00:10:58.018 
15371.171 - 15490.327: 92.0880% ( 23) 00:10:58.018 15490.327 - 15609.484: 92.1975% ( 11) 00:10:58.018 15609.484 - 15728.640: 92.3069% ( 11) 00:10:58.018 15728.640 - 15847.796: 92.3467% ( 4) 00:10:58.018 15847.796 - 15966.953: 92.4164% ( 7) 00:10:58.018 15966.953 - 16086.109: 92.4662% ( 5) 00:10:58.018 16086.109 - 16205.265: 92.5159% ( 5) 00:10:58.018 16205.265 - 16324.422: 92.5756% ( 6) 00:10:58.018 16324.422 - 16443.578: 92.6254% ( 5) 00:10:58.018 16443.578 - 16562.735: 92.6851% ( 6) 00:10:58.018 16562.735 - 16681.891: 92.7548% ( 7) 00:10:58.018 16681.891 - 16801.047: 92.7846% ( 3) 00:10:58.018 16801.047 - 16920.204: 92.8543% ( 7) 00:10:58.018 16920.204 - 17039.360: 92.9140% ( 6) 00:10:58.018 17039.360 - 17158.516: 92.9638% ( 5) 00:10:58.018 17158.516 - 17277.673: 93.0334% ( 7) 00:10:58.018 17277.673 - 17396.829: 93.0832% ( 5) 00:10:58.018 17396.829 - 17515.985: 93.1628% ( 8) 00:10:58.018 17515.985 - 17635.142: 93.2623% ( 10) 00:10:58.018 17635.142 - 17754.298: 93.3221% ( 6) 00:10:58.018 17754.298 - 17873.455: 93.3519% ( 3) 00:10:58.018 17873.455 - 17992.611: 93.3917% ( 4) 00:10:58.018 17992.611 - 18111.767: 93.4315% ( 4) 00:10:58.018 18111.767 - 18230.924: 93.4614% ( 3) 00:10:58.018 18230.924 - 18350.080: 93.4813% ( 2) 00:10:58.018 18350.080 - 18469.236: 93.5211% ( 4) 00:10:58.018 18469.236 - 18588.393: 93.5510% ( 3) 00:10:58.018 18588.393 - 18707.549: 93.5808% ( 3) 00:10:58.018 18707.549 - 18826.705: 93.6107% ( 3) 00:10:58.018 18826.705 - 18945.862: 93.6306% ( 2) 00:10:58.018 20137.425 - 20256.582: 93.9689% ( 34) 00:10:58.018 20256.582 - 20375.738: 94.0486% ( 8) 00:10:58.018 20375.738 - 20494.895: 94.1182% ( 7) 00:10:58.018 20494.895 - 20614.051: 94.2277% ( 11) 00:10:58.018 20614.051 - 20733.207: 94.3372% ( 11) 00:10:58.018 20733.207 - 20852.364: 94.4666% ( 13) 00:10:58.018 20852.364 - 20971.520: 94.6855% ( 22) 00:10:58.018 20971.520 - 21090.676: 94.8846% ( 20) 00:10:58.018 21090.676 - 21209.833: 94.9940% ( 11) 00:10:58.018 21209.833 - 21328.989: 95.1533% ( 16) 00:10:58.018 21328.989 - 21448.145: 95.3025% ( 15) 00:10:58.018 21448.145 - 21567.302: 95.3822% ( 8) 00:10:58.018 21567.302 - 21686.458: 95.4916% ( 11) 00:10:58.018 21686.458 - 21805.615: 95.5812% ( 9) 00:10:58.018 21805.615 - 21924.771: 95.6708% ( 9) 00:10:58.018 21924.771 - 22043.927: 95.7604% ( 9) 00:10:58.018 22043.927 - 22163.084: 95.8499% ( 9) 00:10:58.018 22163.084 - 22282.240: 95.9395% ( 9) 00:10:58.018 22282.240 - 22401.396: 96.0589% ( 12) 00:10:58.018 22401.396 - 22520.553: 96.2281% ( 17) 00:10:58.018 22520.553 - 22639.709: 96.4271% ( 20) 00:10:58.018 22639.709 - 22758.865: 96.5267% ( 10) 00:10:58.018 22758.865 - 22878.022: 96.6361% ( 11) 00:10:58.018 22878.022 - 22997.178: 96.7257% ( 9) 00:10:58.018 22997.178 - 23116.335: 96.8252% ( 10) 00:10:58.018 23116.335 - 23235.491: 96.9049% ( 8) 00:10:58.018 23235.491 - 23354.647: 96.9845% ( 8) 00:10:58.018 23354.647 - 23473.804: 97.0442% ( 6) 00:10:58.018 23473.804 - 23592.960: 97.2134% ( 17) 00:10:58.018 23592.960 - 23712.116: 97.3726% ( 16) 00:10:58.018 23712.116 - 23831.273: 97.4224% ( 5) 00:10:58.018 23831.273 - 23950.429: 97.4920% ( 7) 00:10:58.018 23950.429 - 24069.585: 97.5518% ( 6) 00:10:58.018 24069.585 - 24188.742: 97.6314% ( 8) 00:10:58.018 24188.742 - 24307.898: 97.6911% ( 6) 00:10:58.018 24307.898 - 24427.055: 97.7707% ( 8) 00:10:58.018 24427.055 - 24546.211: 97.8503% ( 8) 00:10:58.018 24546.211 - 24665.367: 97.9697% ( 12) 00:10:58.018 24665.367 - 24784.524: 98.0295% ( 6) 00:10:58.018 24784.524 - 24903.680: 98.0892% ( 6) 00:10:58.019 24903.680 - 25022.836: 98.1588% 
( 7) 00:10:58.019 25022.836 - 25141.993: 98.2086% ( 5) 00:10:58.019 25141.993 - 25261.149: 98.2584% ( 5) 00:10:58.019 25261.149 - 25380.305: 98.2982% ( 4) 00:10:58.019 25380.305 - 25499.462: 98.3479% ( 5) 00:10:58.019 25499.462 - 25618.618: 98.3877% ( 4) 00:10:58.019 25618.618 - 25737.775: 98.4375% ( 5) 00:10:58.019 25737.775 - 25856.931: 98.4873% ( 5) 00:10:58.019 25856.931 - 25976.087: 98.5370% ( 5) 00:10:58.019 25976.087 - 26095.244: 98.6465% ( 11) 00:10:58.019 26095.244 - 26214.400: 98.6664% ( 2) 00:10:58.019 26214.400 - 26333.556: 98.6863% ( 2) 00:10:58.019 26333.556 - 26452.713: 98.7361% ( 5) 00:10:58.019 26452.713 - 26571.869: 98.7958% ( 6) 00:10:58.019 26571.869 - 26691.025: 98.8356% ( 4) 00:10:58.019 26691.025 - 26810.182: 98.8953% ( 6) 00:10:58.019 26810.182 - 26929.338: 98.9351% ( 4) 00:10:58.019 26929.338 - 27048.495: 98.9550% ( 2) 00:10:58.019 27048.495 - 27167.651: 98.9849% ( 3) 00:10:58.019 27167.651 - 27286.807: 99.0446% ( 6) 00:10:58.019 27286.807 - 27405.964: 99.0545% ( 1) 00:10:58.019 27405.964 - 27525.120: 99.0645% ( 1) 00:10:58.019 27525.120 - 27644.276: 99.1143% ( 5) 00:10:58.019 27644.276 - 27763.433: 99.1541% ( 4) 00:10:58.019 27763.433 - 27882.589: 99.2038% ( 5) 00:10:58.019 27882.589 - 28001.745: 99.2536% ( 5) 00:10:58.019 28001.745 - 28120.902: 99.2934% ( 4) 00:10:58.019 28120.902 - 28240.058: 99.3332% ( 4) 00:10:58.019 28240.058 - 28359.215: 99.3830% ( 5) 00:10:58.019 28359.215 - 28478.371: 99.4327% ( 5) 00:10:58.019 28478.371 - 28597.527: 99.4924% ( 6) 00:10:58.019 28597.527 - 28716.684: 99.5322% ( 4) 00:10:58.019 28716.684 - 28835.840: 99.5820% ( 5) 00:10:58.019 28835.840 - 28954.996: 99.6318% ( 5) 00:10:58.019 28954.996 - 29074.153: 99.6815% ( 5) 00:10:58.019 29074.153 - 29193.309: 99.7313% ( 5) 00:10:58.019 29193.309 - 29312.465: 99.7811% ( 5) 00:10:58.019 29312.465 - 29431.622: 99.8308% ( 5) 00:10:58.019 29431.622 - 29550.778: 99.8507% ( 2) 00:10:58.019 29550.778 - 29669.935: 99.8806% ( 3) 00:10:58.019 29669.935 - 29789.091: 99.9104% ( 3) 00:10:58.019 29789.091 - 29908.247: 99.9403% ( 3) 00:10:58.019 29908.247 - 30027.404: 99.9701% ( 3) 00:10:58.019 30027.404 - 30146.560: 100.0000% ( 3) 00:10:58.019 00:10:58.019 11:35:56 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:10:58.019 ************************************ 00:10:58.019 END TEST nvme_perf 00:10:58.019 ************************************ 00:10:58.019 00:10:58.019 real 0m2.765s 00:10:58.019 user 0m2.328s 00:10:58.019 sys 0m0.320s 00:10:58.019 11:35:56 nvme.nvme_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:58.019 11:35:56 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:10:58.019 11:35:56 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:10:58.019 11:35:56 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:58.019 11:35:56 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:58.019 11:35:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:58.019 ************************************ 00:10:58.019 START TEST nvme_hello_world 00:10:58.019 ************************************ 00:10:58.019 11:35:56 nvme.nvme_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:10:58.277 Initializing NVMe Controllers 00:10:58.277 Attached to 0000:00:10.0 00:10:58.277 Namespace ID: 1 size: 6GB 00:10:58.277 Attached to 0000:00:11.0 00:10:58.277 Namespace ID: 1 size: 5GB 00:10:58.277 Attached to 0000:00:13.0 00:10:58.277 
Namespace ID: 1 size: 1GB 00:10:58.277 Attached to 0000:00:12.0 00:10:58.277 Namespace ID: 1 size: 4GB 00:10:58.277 Namespace ID: 2 size: 4GB 00:10:58.277 Namespace ID: 3 size: 4GB 00:10:58.277 Initialization complete. 00:10:58.277 INFO: using host memory buffer for IO 00:10:58.277 Hello world! 00:10:58.277 INFO: using host memory buffer for IO 00:10:58.277 Hello world! 00:10:58.277 INFO: using host memory buffer for IO 00:10:58.277 Hello world! 00:10:58.277 INFO: using host memory buffer for IO 00:10:58.277 Hello world! 00:10:58.277 INFO: using host memory buffer for IO 00:10:58.277 Hello world! 00:10:58.277 INFO: using host memory buffer for IO 00:10:58.277 Hello world! 00:10:58.277 ************************************ 00:10:58.277 END TEST nvme_hello_world 00:10:58.277 ************************************ 00:10:58.277 00:10:58.277 real 0m0.352s 00:10:58.277 user 0m0.137s 00:10:58.277 sys 0m0.167s 00:10:58.277 11:35:57 nvme.nvme_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:58.277 11:35:57 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:10:58.277 11:35:57 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:10:58.277 11:35:57 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:58.277 11:35:57 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:58.278 11:35:57 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:58.278 ************************************ 00:10:58.278 START TEST nvme_sgl 00:10:58.278 ************************************ 00:10:58.278 11:35:57 nvme.nvme_sgl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:10:58.545 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:10:58.546 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:10:58.821 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:10:58.821 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:10:58.821 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:10:58.822 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:10:58.822 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:10:58.822 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:10:58.822 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:10:58.822 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:10:58.822 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:10:58.822 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:10:58.822 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:10:58.822 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:10:58.822 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:10:58.822 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:10:58.822 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:10:58.822 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:10:58.822 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:10:58.822 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:10:58.822 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:10:58.822 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:10:58.822 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:10:58.822 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:10:58.822 0000:00:12.0: build_io_request_0 Invalid IO length 
parameter 00:10:58.822 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:10:58.822 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:10:58.822 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:10:58.822 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:10:58.822 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:10:58.822 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:10:58.822 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:10:58.822 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:10:58.822 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:10:58.822 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:10:58.822 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:10:58.822 NVMe Readv/Writev Request test 00:10:58.822 Attached to 0000:00:10.0 00:10:58.822 Attached to 0000:00:11.0 00:10:58.822 Attached to 0000:00:13.0 00:10:58.822 Attached to 0000:00:12.0 00:10:58.822 0000:00:10.0: build_io_request_2 test passed 00:10:58.822 0000:00:10.0: build_io_request_4 test passed 00:10:58.822 0000:00:10.0: build_io_request_5 test passed 00:10:58.822 0000:00:10.0: build_io_request_6 test passed 00:10:58.822 0000:00:10.0: build_io_request_7 test passed 00:10:58.822 0000:00:10.0: build_io_request_10 test passed 00:10:58.822 0000:00:11.0: build_io_request_2 test passed 00:10:58.822 0000:00:11.0: build_io_request_4 test passed 00:10:58.822 0000:00:11.0: build_io_request_5 test passed 00:10:58.822 0000:00:11.0: build_io_request_6 test passed 00:10:58.822 0000:00:11.0: build_io_request_7 test passed 00:10:58.822 0000:00:11.0: build_io_request_10 test passed 00:10:58.822 Cleaning up... 00:10:58.822 00:10:58.822 real 0m0.453s 00:10:58.822 user 0m0.235s 00:10:58.822 sys 0m0.160s 00:10:58.822 11:35:57 nvme.nvme_sgl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:58.822 ************************************ 00:10:58.822 END TEST nvme_sgl 00:10:58.822 ************************************ 00:10:58.822 11:35:57 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:10:58.822 11:35:57 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:10:58.822 11:35:57 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:58.822 11:35:57 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:58.822 11:35:57 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:58.822 ************************************ 00:10:58.822 START TEST nvme_e2edp 00:10:58.822 ************************************ 00:10:58.822 11:35:57 nvme.nvme_e2edp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:10:59.079 NVMe Write/Read with End-to-End data protection test 00:10:59.079 Attached to 0000:00:10.0 00:10:59.079 Attached to 0000:00:11.0 00:10:59.079 Attached to 0000:00:13.0 00:10:59.079 Attached to 0000:00:12.0 00:10:59.079 Cleaning up... 
00:10:59.079 00:10:59.079 real 0m0.297s 00:10:59.079 user 0m0.109s 00:10:59.079 sys 0m0.145s 00:10:59.079 11:35:58 nvme.nvme_e2edp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:59.079 ************************************ 00:10:59.079 END TEST nvme_e2edp 00:10:59.079 ************************************ 00:10:59.079 11:35:58 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:10:59.337 11:35:58 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:10:59.337 11:35:58 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:59.337 11:35:58 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:59.337 11:35:58 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:59.337 ************************************ 00:10:59.337 START TEST nvme_reserve 00:10:59.337 ************************************ 00:10:59.337 11:35:58 nvme.nvme_reserve -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:10:59.595 ===================================================== 00:10:59.595 NVMe Controller at PCI bus 0, device 16, function 0 00:10:59.595 ===================================================== 00:10:59.595 Reservations: Not Supported 00:10:59.595 ===================================================== 00:10:59.595 NVMe Controller at PCI bus 0, device 17, function 0 00:10:59.595 ===================================================== 00:10:59.595 Reservations: Not Supported 00:10:59.595 ===================================================== 00:10:59.595 NVMe Controller at PCI bus 0, device 19, function 0 00:10:59.595 ===================================================== 00:10:59.595 Reservations: Not Supported 00:10:59.595 ===================================================== 00:10:59.595 NVMe Controller at PCI bus 0, device 18, function 0 00:10:59.595 ===================================================== 00:10:59.595 Reservations: Not Supported 00:10:59.595 Reservation test passed 00:10:59.595 00:10:59.595 real 0m0.316s 00:10:59.595 user 0m0.107s 00:10:59.595 sys 0m0.165s 00:10:59.595 11:35:58 nvme.nvme_reserve -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:59.595 ************************************ 00:10:59.595 END TEST nvme_reserve 00:10:59.595 ************************************ 00:10:59.595 11:35:58 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:10:59.595 11:35:58 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:10:59.595 11:35:58 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:59.595 11:35:58 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:59.595 11:35:58 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:59.595 ************************************ 00:10:59.595 START TEST nvme_err_injection 00:10:59.595 ************************************ 00:10:59.595 11:35:58 nvme.nvme_err_injection -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:10:59.853 NVMe Error Injection test 00:10:59.853 Attached to 0000:00:10.0 00:10:59.853 Attached to 0000:00:11.0 00:10:59.853 Attached to 0000:00:13.0 00:10:59.853 Attached to 0000:00:12.0 00:10:59.853 0000:00:10.0: get features failed as expected 00:10:59.853 0000:00:11.0: get features failed as expected 00:10:59.853 0000:00:13.0: get features failed as expected 00:10:59.853 0000:00:12.0: get features failed as expected 00:10:59.853 
0000:00:10.0: get features successfully as expected 00:10:59.853 0000:00:11.0: get features successfully as expected 00:10:59.853 0000:00:13.0: get features successfully as expected 00:10:59.853 0000:00:12.0: get features successfully as expected 00:10:59.853 0000:00:10.0: read failed as expected 00:10:59.853 0000:00:11.0: read failed as expected 00:10:59.853 0000:00:12.0: read failed as expected 00:10:59.853 0000:00:13.0: read failed as expected 00:10:59.853 0000:00:10.0: read successfully as expected 00:10:59.853 0000:00:11.0: read successfully as expected 00:10:59.853 0000:00:13.0: read successfully as expected 00:10:59.853 0000:00:12.0: read successfully as expected 00:10:59.853 Cleaning up... 00:10:59.853 ************************************ 00:10:59.853 END TEST nvme_err_injection 00:10:59.853 ************************************ 00:10:59.853 00:10:59.853 real 0m0.277s 00:10:59.853 user 0m0.106s 00:10:59.853 sys 0m0.130s 00:10:59.853 11:35:58 nvme.nvme_err_injection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:59.853 11:35:58 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:10:59.853 11:35:58 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:10:59.853 11:35:58 nvme -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:10:59.853 11:35:58 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:59.853 11:35:58 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:59.853 ************************************ 00:10:59.853 START TEST nvme_overhead 00:10:59.853 ************************************ 00:10:59.853 11:35:58 nvme.nvme_overhead -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:11:01.226 Initializing NVMe Controllers 00:11:01.226 Attached to 0000:00:10.0 00:11:01.226 Attached to 0000:00:11.0 00:11:01.226 Attached to 0000:00:13.0 00:11:01.226 Attached to 0000:00:12.0 00:11:01.226 Initialization complete. Launching workers. 
00:11:01.226 submit (in ns) avg, min, max = 14946.1, 12707.3, 95322.7 00:11:01.226 complete (in ns) avg, min, max = 10113.4, 8991.4, 95971.4 00:11:01.226 00:11:01.226 Submit histogram 00:11:01.226 ================ 00:11:01.226 Range in us Cumulative Count 00:11:01.226 12.684 - 12.742: 0.0105% ( 1) 00:11:01.226 12.800 - 12.858: 0.0210% ( 1) 00:11:01.226 13.091 - 13.149: 0.0524% ( 3) 00:11:01.226 13.149 - 13.207: 0.1048% ( 5) 00:11:01.226 13.207 - 13.265: 0.4820% ( 36) 00:11:01.226 13.265 - 13.324: 1.5928% ( 106) 00:11:01.226 13.324 - 13.382: 3.3742% ( 170) 00:11:01.226 13.382 - 13.440: 5.8577% ( 237) 00:11:01.226 13.440 - 13.498: 9.1376% ( 313) 00:11:01.226 13.498 - 13.556: 12.4804% ( 319) 00:11:01.226 13.556 - 13.615: 15.6659% ( 304) 00:11:01.226 13.615 - 13.673: 18.4219% ( 263) 00:11:01.226 13.673 - 13.731: 20.7901% ( 226) 00:11:01.226 13.731 - 13.789: 22.4563% ( 159) 00:11:01.226 13.789 - 13.847: 23.6194% ( 111) 00:11:01.226 13.847 - 13.905: 24.6359% ( 97) 00:11:01.226 13.905 - 13.964: 25.6104% ( 93) 00:11:01.226 13.964 - 14.022: 26.5011% ( 85) 00:11:01.226 14.022 - 14.080: 27.1717% ( 64) 00:11:01.226 14.080 - 14.138: 27.7271% ( 53) 00:11:01.226 14.138 - 14.196: 28.2511% ( 50) 00:11:01.226 14.196 - 14.255: 28.9532% ( 67) 00:11:01.226 14.255 - 14.313: 30.3154% ( 130) 00:11:01.226 14.313 - 14.371: 32.7989% ( 237) 00:11:01.226 14.371 - 14.429: 36.0578% ( 311) 00:11:01.226 14.429 - 14.487: 40.5323% ( 427) 00:11:01.226 14.487 - 14.545: 45.7508% ( 498) 00:11:01.226 14.545 - 14.604: 52.2582% ( 621) 00:11:01.226 14.604 - 14.662: 58.5874% ( 604) 00:11:01.226 14.662 - 14.720: 64.3613% ( 551) 00:11:01.226 14.720 - 14.778: 68.4900% ( 394) 00:11:01.226 14.778 - 14.836: 72.1576% ( 350) 00:11:01.226 14.836 - 14.895: 74.8297% ( 255) 00:11:01.226 14.895 - 15.011: 79.0108% ( 399) 00:11:01.226 15.011 - 15.127: 82.2907% ( 313) 00:11:01.226 15.127 - 15.244: 84.7951% ( 239) 00:11:01.226 15.244 - 15.360: 86.3670% ( 150) 00:11:01.226 15.360 - 15.476: 87.6349% ( 121) 00:11:01.226 15.476 - 15.593: 88.5675% ( 89) 00:11:01.226 15.593 - 15.709: 89.0600% ( 47) 00:11:01.226 15.709 - 15.825: 89.4373% ( 36) 00:11:01.226 15.825 - 15.942: 89.7621% ( 31) 00:11:01.226 15.942 - 16.058: 90.0451% ( 27) 00:11:01.226 16.058 - 16.175: 90.2127% ( 16) 00:11:01.226 16.175 - 16.291: 90.3804% ( 16) 00:11:01.226 16.291 - 16.407: 90.6528% ( 26) 00:11:01.226 16.407 - 16.524: 90.9567% ( 29) 00:11:01.226 16.524 - 16.640: 91.5331% ( 55) 00:11:01.226 16.640 - 16.756: 92.2771% ( 71) 00:11:01.226 16.756 - 16.873: 92.8534% ( 55) 00:11:01.226 16.873 - 16.989: 93.1573% ( 29) 00:11:01.226 16.989 - 17.105: 93.4926% ( 32) 00:11:01.226 17.105 - 17.222: 93.8489% ( 34) 00:11:01.226 17.222 - 17.338: 94.0585% ( 20) 00:11:01.226 17.338 - 17.455: 94.2366% ( 17) 00:11:01.226 17.455 - 17.571: 94.3938% ( 15) 00:11:01.226 17.571 - 17.687: 94.5615% ( 16) 00:11:01.226 17.687 - 17.804: 94.6662% ( 10) 00:11:01.226 17.804 - 17.920: 94.7291% ( 6) 00:11:01.226 17.920 - 18.036: 94.7920% ( 6) 00:11:01.226 18.036 - 18.153: 94.8549% ( 6) 00:11:01.226 18.153 - 18.269: 94.9177% ( 6) 00:11:01.226 18.269 - 18.385: 94.9492% ( 3) 00:11:01.226 18.385 - 18.502: 95.0016% ( 5) 00:11:01.226 18.502 - 18.618: 95.0540% ( 5) 00:11:01.226 18.618 - 18.735: 95.0644% ( 1) 00:11:01.226 18.735 - 18.851: 95.1064% ( 4) 00:11:01.226 18.851 - 18.967: 95.1483% ( 4) 00:11:01.226 18.967 - 19.084: 95.1588% ( 1) 00:11:01.226 19.084 - 19.200: 95.1797% ( 2) 00:11:01.226 19.200 - 19.316: 95.2321% ( 5) 00:11:01.226 19.316 - 19.433: 95.2740% ( 4) 00:11:01.226 19.433 - 19.549: 95.3264% ( 5) 00:11:01.226 
19.549 - 19.665: 95.4522% ( 12) 00:11:01.226 19.665 - 19.782: 95.5150% ( 6) 00:11:01.226 19.782 - 19.898: 95.5570% ( 4) 00:11:01.226 19.898 - 20.015: 95.6408% ( 8) 00:11:01.226 20.015 - 20.131: 95.7456% ( 10) 00:11:01.226 20.131 - 20.247: 95.9028% ( 15) 00:11:01.226 20.247 - 20.364: 96.0599% ( 15) 00:11:01.226 20.364 - 20.480: 96.1333% ( 7) 00:11:01.226 20.480 - 20.596: 96.3114% ( 17) 00:11:01.226 20.596 - 20.713: 96.3743% ( 6) 00:11:01.226 20.713 - 20.829: 96.5944% ( 21) 00:11:01.226 20.829 - 20.945: 96.7620% ( 16) 00:11:01.226 20.945 - 21.062: 96.8563% ( 9) 00:11:01.226 21.062 - 21.178: 96.9926% ( 13) 00:11:01.226 21.178 - 21.295: 97.0973% ( 10) 00:11:01.226 21.295 - 21.411: 97.2755% ( 17) 00:11:01.226 21.411 - 21.527: 97.4012% ( 12) 00:11:01.226 21.527 - 21.644: 97.6003% ( 19) 00:11:01.226 21.644 - 21.760: 97.6946% ( 9) 00:11:01.226 21.760 - 21.876: 97.8204% ( 12) 00:11:01.226 21.876 - 21.993: 97.8937% ( 7) 00:11:01.226 21.993 - 22.109: 97.9776% ( 8) 00:11:01.226 22.109 - 22.225: 97.9985% ( 2) 00:11:01.226 22.225 - 22.342: 98.1348% ( 13) 00:11:01.227 22.342 - 22.458: 98.1662% ( 3) 00:11:01.227 22.458 - 22.575: 98.2291% ( 6) 00:11:01.227 22.575 - 22.691: 98.2919% ( 6) 00:11:01.227 22.691 - 22.807: 98.3653% ( 7) 00:11:01.227 22.807 - 22.924: 98.4491% ( 8) 00:11:01.227 22.924 - 23.040: 98.4910% ( 4) 00:11:01.227 23.040 - 23.156: 98.5749% ( 8) 00:11:01.227 23.156 - 23.273: 98.6273% ( 5) 00:11:01.227 23.273 - 23.389: 98.6692% ( 4) 00:11:01.227 23.389 - 23.505: 98.7321% ( 6) 00:11:01.227 23.505 - 23.622: 98.7530% ( 2) 00:11:01.227 23.622 - 23.738: 98.7635% ( 1) 00:11:01.227 23.738 - 23.855: 98.7844% ( 2) 00:11:01.227 23.855 - 23.971: 98.8683% ( 8) 00:11:01.227 23.971 - 24.087: 98.9102% ( 4) 00:11:01.227 24.087 - 24.204: 98.9312% ( 2) 00:11:01.227 24.204 - 24.320: 99.0150% ( 8) 00:11:01.227 24.320 - 24.436: 99.0674% ( 5) 00:11:01.227 24.436 - 24.553: 99.1093% ( 4) 00:11:01.227 24.553 - 24.669: 99.1198% ( 1) 00:11:01.227 24.669 - 24.785: 99.1617% ( 4) 00:11:01.227 24.785 - 24.902: 99.1826% ( 2) 00:11:01.227 24.902 - 25.018: 99.2141% ( 3) 00:11:01.227 25.018 - 25.135: 99.2455% ( 3) 00:11:01.227 25.135 - 25.251: 99.2665% ( 2) 00:11:01.227 25.251 - 25.367: 99.2770% ( 1) 00:11:01.227 25.367 - 25.484: 99.3084% ( 3) 00:11:01.227 25.484 - 25.600: 99.3189% ( 1) 00:11:01.227 25.600 - 25.716: 99.3398% ( 2) 00:11:01.227 25.716 - 25.833: 99.3608% ( 2) 00:11:01.227 25.833 - 25.949: 99.4132% ( 5) 00:11:01.227 25.949 - 26.065: 99.4341% ( 2) 00:11:01.227 26.065 - 26.182: 99.4446% ( 1) 00:11:01.227 26.182 - 26.298: 99.4656% ( 2) 00:11:01.227 26.298 - 26.415: 99.4865% ( 2) 00:11:01.227 26.415 - 26.531: 99.5075% ( 2) 00:11:01.227 26.531 - 26.647: 99.5285% ( 2) 00:11:01.227 26.647 - 26.764: 99.5389% ( 1) 00:11:01.227 26.764 - 26.880: 99.5494% ( 1) 00:11:01.227 26.996 - 27.113: 99.5599% ( 1) 00:11:01.227 27.113 - 27.229: 99.5808% ( 2) 00:11:01.227 27.229 - 27.345: 99.5913% ( 1) 00:11:01.227 27.462 - 27.578: 99.6018% ( 1) 00:11:01.227 27.695 - 27.811: 99.6437% ( 4) 00:11:01.227 27.811 - 27.927: 99.6542% ( 1) 00:11:01.227 28.160 - 28.276: 99.6752% ( 2) 00:11:01.227 28.276 - 28.393: 99.6856% ( 1) 00:11:01.227 28.393 - 28.509: 99.6961% ( 1) 00:11:01.227 28.509 - 28.625: 99.7275% ( 3) 00:11:01.227 28.625 - 28.742: 99.7380% ( 1) 00:11:01.227 28.742 - 28.858: 99.7485% ( 1) 00:11:01.227 28.858 - 28.975: 99.7590% ( 1) 00:11:01.227 29.440 - 29.556: 99.7695% ( 1) 00:11:01.227 29.556 - 29.673: 99.7799% ( 1) 00:11:01.227 29.673 - 29.789: 99.7904% ( 1) 00:11:01.227 30.022 - 30.255: 99.8009% ( 1) 00:11:01.227 30.255 - 30.487: 
99.8114% ( 1) 00:11:01.227 31.651 - 31.884: 99.8219% ( 1) 00:11:01.227 32.582 - 32.815: 99.8428% ( 2) 00:11:01.227 32.815 - 33.047: 99.8533% ( 1) 00:11:01.227 33.280 - 33.513: 99.8638% ( 1) 00:11:01.227 34.211 - 34.444: 99.8743% ( 1) 00:11:01.227 34.444 - 34.676: 99.8847% ( 1) 00:11:01.227 35.142 - 35.375: 99.8952% ( 1) 00:11:01.227 35.375 - 35.607: 99.9057% ( 1) 00:11:01.227 35.607 - 35.840: 99.9162% ( 1) 00:11:01.227 38.865 - 39.098: 99.9266% ( 1) 00:11:01.227 39.331 - 39.564: 99.9476% ( 2) 00:11:01.227 44.684 - 44.916: 99.9581% ( 1) 00:11:01.227 51.433 - 51.665: 99.9686% ( 1) 00:11:01.227 63.302 - 63.767: 99.9790% ( 1) 00:11:01.227 93.091 - 93.556: 99.9895% ( 1) 00:11:01.227 94.953 - 95.418: 100.0000% ( 1) 00:11:01.227 00:11:01.227 Complete histogram 00:11:01.227 ================== 00:11:01.227 Range in us Cumulative Count 00:11:01.227 8.960 - 9.018: 0.0629% ( 6) 00:11:01.227 9.018 - 9.076: 0.7754% ( 68) 00:11:01.227 9.076 - 9.135: 4.2439% ( 331) 00:11:01.227 9.135 - 9.193: 11.9145% ( 732) 00:11:01.227 9.193 - 9.251: 22.6239% ( 1022) 00:11:01.227 9.251 - 9.309: 33.0923% ( 999) 00:11:01.227 9.309 - 9.367: 41.0458% ( 759) 00:11:01.227 9.367 - 9.425: 46.1385% ( 486) 00:11:01.227 9.425 - 9.484: 49.3660% ( 308) 00:11:01.227 9.484 - 9.542: 52.7612% ( 324) 00:11:01.227 9.542 - 9.600: 56.8270% ( 388) 00:11:01.227 9.600 - 9.658: 61.7625% ( 471) 00:11:01.227 9.658 - 9.716: 67.0544% ( 505) 00:11:01.227 9.716 - 9.775: 71.5289% ( 427) 00:11:01.227 9.775 - 9.833: 74.7459% ( 307) 00:11:01.227 9.833 - 9.891: 77.2189% ( 236) 00:11:01.227 9.891 - 9.949: 78.8641% ( 157) 00:11:01.227 9.949 - 10.007: 80.0587% ( 114) 00:11:01.227 10.007 - 10.065: 80.9389% ( 84) 00:11:01.227 10.065 - 10.124: 81.6515% ( 68) 00:11:01.227 10.124 - 10.182: 82.1754% ( 50) 00:11:01.227 10.182 - 10.240: 82.8356% ( 63) 00:11:01.227 10.240 - 10.298: 83.4224% ( 56) 00:11:01.227 10.298 - 10.356: 84.2398% ( 78) 00:11:01.227 10.356 - 10.415: 85.1829% ( 90) 00:11:01.227 10.415 - 10.473: 85.8325% ( 62) 00:11:01.227 10.473 - 10.531: 86.7023% ( 83) 00:11:01.227 10.531 - 10.589: 87.5092% ( 77) 00:11:01.227 10.589 - 10.647: 88.2846% ( 74) 00:11:01.227 10.647 - 10.705: 88.8295% ( 52) 00:11:01.227 10.705 - 10.764: 89.4058% ( 55) 00:11:01.227 10.764 - 10.822: 89.8879% ( 46) 00:11:01.227 10.822 - 10.880: 90.2127% ( 31) 00:11:01.227 10.880 - 10.938: 90.5166% ( 29) 00:11:01.227 10.938 - 10.996: 90.6633% ( 14) 00:11:01.227 10.996 - 11.055: 90.7891% ( 12) 00:11:01.227 11.055 - 11.113: 90.9148% ( 12) 00:11:01.227 11.113 - 11.171: 90.9882% ( 7) 00:11:01.227 11.171 - 11.229: 91.0510% ( 6) 00:11:01.227 11.229 - 11.287: 91.1244% ( 7) 00:11:01.227 11.287 - 11.345: 91.1663% ( 4) 00:11:01.227 11.345 - 11.404: 91.2816% ( 11) 00:11:01.227 11.404 - 11.462: 91.4388% ( 15) 00:11:01.227 11.462 - 11.520: 91.5540% ( 11) 00:11:01.227 11.520 - 11.578: 91.7007% ( 14) 00:11:01.227 11.578 - 11.636: 91.8265% ( 12) 00:11:01.227 11.636 - 11.695: 91.9313% ( 10) 00:11:01.227 11.695 - 11.753: 92.0570% ( 12) 00:11:01.227 11.753 - 11.811: 92.1304% ( 7) 00:11:01.227 11.811 - 11.869: 92.1723% ( 4) 00:11:01.227 11.869 - 11.927: 92.2351% ( 6) 00:11:01.227 11.927 - 11.985: 92.2666% ( 3) 00:11:01.227 11.985 - 12.044: 92.3609% ( 9) 00:11:01.227 12.044 - 12.102: 92.4342% ( 7) 00:11:01.227 12.102 - 12.160: 92.5076% ( 7) 00:11:01.227 12.160 - 12.218: 92.5600% ( 5) 00:11:01.227 12.218 - 12.276: 92.5914% ( 3) 00:11:01.227 12.276 - 12.335: 92.6648% ( 7) 00:11:01.227 12.335 - 12.393: 92.7172% ( 5) 00:11:01.227 12.393 - 12.451: 92.8010% ( 8) 00:11:01.227 12.451 - 12.509: 92.8115% ( 1) 
00:11:01.227 12.509 - 12.567: 92.8324% ( 2) 00:11:01.227 12.567 - 12.625: 92.8534% ( 2) 00:11:01.227 12.742 - 12.800: 92.8848% ( 3) 00:11:01.227 12.858 - 12.916: 92.9163% ( 3) 00:11:01.227 12.916 - 12.975: 92.9268% ( 1) 00:11:01.227 13.149 - 13.207: 92.9372% ( 1) 00:11:01.227 13.265 - 13.324: 92.9687% ( 3) 00:11:01.227 13.382 - 13.440: 92.9791% ( 1) 00:11:01.227 13.440 - 13.498: 92.9896% ( 1) 00:11:01.227 13.556 - 13.615: 93.0211% ( 3) 00:11:01.227 13.673 - 13.731: 93.0420% ( 2) 00:11:01.227 13.905 - 13.964: 93.0735% ( 3) 00:11:01.227 13.964 - 14.022: 93.0839% ( 1) 00:11:01.227 14.080 - 14.138: 93.0944% ( 1) 00:11:01.227 14.138 - 14.196: 93.1154% ( 2) 00:11:01.227 14.196 - 14.255: 93.1259% ( 1) 00:11:01.227 14.255 - 14.313: 93.1678% ( 4) 00:11:01.227 14.313 - 14.371: 93.1782% ( 1) 00:11:01.227 14.371 - 14.429: 93.1992% ( 2) 00:11:01.227 14.429 - 14.487: 93.2726% ( 7) 00:11:01.227 14.487 - 14.545: 93.3040% ( 3) 00:11:01.227 14.545 - 14.604: 93.5031% ( 19) 00:11:01.227 14.604 - 14.662: 93.8489% ( 33) 00:11:01.227 14.662 - 14.720: 94.1737% ( 31) 00:11:01.227 14.720 - 14.778: 94.4357% ( 25) 00:11:01.227 14.778 - 14.836: 94.8130% ( 36) 00:11:01.227 14.836 - 14.895: 95.1168% ( 29) 00:11:01.227 14.895 - 15.011: 95.6827% ( 54) 00:11:01.227 15.011 - 15.127: 96.0285% ( 33) 00:11:01.227 15.127 - 15.244: 96.2171% ( 18) 00:11:01.227 15.244 - 15.360: 96.3429% ( 12) 00:11:01.227 15.360 - 15.476: 96.5105% ( 16) 00:11:01.227 15.476 - 15.593: 96.8354% ( 31) 00:11:01.227 15.593 - 15.709: 97.2231% ( 37) 00:11:01.228 15.709 - 15.825: 97.4641% ( 23) 00:11:01.228 15.825 - 15.942: 97.6108% ( 14) 00:11:01.228 15.942 - 16.058: 97.7680% ( 15) 00:11:01.228 16.058 - 16.175: 97.8204% ( 5) 00:11:01.228 16.175 - 16.291: 97.9252% ( 10) 00:11:01.228 16.291 - 16.407: 97.9881% ( 6) 00:11:01.228 16.407 - 16.524: 98.0300% ( 4) 00:11:01.228 16.524 - 16.640: 98.1348% ( 10) 00:11:01.228 16.640 - 16.756: 98.1557% ( 2) 00:11:01.228 16.756 - 16.873: 98.1872% ( 3) 00:11:01.228 16.873 - 16.989: 98.2500% ( 6) 00:11:01.228 16.989 - 17.105: 98.3234% ( 7) 00:11:01.228 17.105 - 17.222: 98.3653% ( 4) 00:11:01.228 17.222 - 17.338: 98.4806% ( 11) 00:11:01.228 17.338 - 17.455: 98.5958% ( 11) 00:11:01.228 17.455 - 17.571: 98.6692% ( 7) 00:11:01.228 17.571 - 17.687: 98.7844% ( 11) 00:11:01.228 17.687 - 17.804: 98.8892% ( 10) 00:11:01.228 17.804 - 17.920: 98.9521% ( 6) 00:11:01.228 17.920 - 18.036: 98.9626% ( 1) 00:11:01.228 18.036 - 18.153: 99.0464% ( 8) 00:11:01.228 18.153 - 18.269: 99.0569% ( 1) 00:11:01.228 18.385 - 18.502: 99.0779% ( 2) 00:11:01.228 18.502 - 18.618: 99.1198% ( 4) 00:11:01.228 18.618 - 18.735: 99.1722% ( 5) 00:11:01.228 18.735 - 18.851: 99.1931% ( 2) 00:11:01.228 18.851 - 18.967: 99.2141% ( 2) 00:11:01.228 18.967 - 19.084: 99.2560% ( 4) 00:11:01.228 19.084 - 19.200: 99.2665% ( 1) 00:11:01.228 19.200 - 19.316: 99.2770% ( 1) 00:11:01.228 19.316 - 19.433: 99.3189% ( 4) 00:11:01.228 19.433 - 19.549: 99.3398% ( 2) 00:11:01.228 19.549 - 19.665: 99.3608% ( 2) 00:11:01.228 19.665 - 19.782: 99.3713% ( 1) 00:11:01.228 20.131 - 20.247: 99.4027% ( 3) 00:11:01.228 20.364 - 20.480: 99.4237% ( 2) 00:11:01.228 20.480 - 20.596: 99.4551% ( 3) 00:11:01.228 20.596 - 20.713: 99.4656% ( 1) 00:11:01.228 20.829 - 20.945: 99.4761% ( 1) 00:11:01.228 20.945 - 21.062: 99.5180% ( 4) 00:11:01.228 21.178 - 21.295: 99.5389% ( 2) 00:11:01.228 21.527 - 21.644: 99.5494% ( 1) 00:11:01.228 21.644 - 21.760: 99.5599% ( 1) 00:11:01.228 21.760 - 21.876: 99.5913% ( 3) 00:11:01.228 21.876 - 21.993: 99.6018% ( 1) 00:11:01.228 21.993 - 22.109: 99.6123% ( 1) 00:11:01.228 
22.109 - 22.225: 99.6228% ( 1) 00:11:01.228 22.458 - 22.575: 99.6332% ( 1) 00:11:01.228 22.575 - 22.691: 99.6437% ( 1) 00:11:01.228 22.807 - 22.924: 99.6542% ( 1) 00:11:01.228 22.924 - 23.040: 99.6647% ( 1) 00:11:01.228 23.156 - 23.273: 99.6752% ( 1) 00:11:01.228 23.505 - 23.622: 99.6856% ( 1) 00:11:01.228 23.622 - 23.738: 99.6961% ( 1) 00:11:01.228 23.738 - 23.855: 99.7066% ( 1) 00:11:01.228 23.855 - 23.971: 99.7275% ( 2) 00:11:01.228 24.320 - 24.436: 99.7380% ( 1) 00:11:01.228 24.436 - 24.553: 99.7485% ( 1) 00:11:01.228 24.553 - 24.669: 99.7590% ( 1) 00:11:01.228 24.669 - 24.785: 99.7695% ( 1) 00:11:01.228 25.135 - 25.251: 99.8009% ( 3) 00:11:01.228 25.251 - 25.367: 99.8323% ( 3) 00:11:01.228 25.367 - 25.484: 99.8428% ( 1) 00:11:01.228 26.182 - 26.298: 99.8533% ( 1) 00:11:01.228 26.531 - 26.647: 99.8638% ( 1) 00:11:01.228 26.764 - 26.880: 99.8743% ( 1) 00:11:01.228 27.695 - 27.811: 99.8847% ( 1) 00:11:01.228 28.975 - 29.091: 99.8952% ( 1) 00:11:01.228 29.091 - 29.207: 99.9057% ( 1) 00:11:01.228 37.004 - 37.236: 99.9162% ( 1) 00:11:01.228 38.400 - 38.633: 99.9266% ( 1) 00:11:01.228 41.658 - 41.891: 99.9371% ( 1) 00:11:01.228 46.545 - 46.778: 99.9476% ( 1) 00:11:01.228 52.131 - 52.364: 99.9581% ( 1) 00:11:01.228 56.087 - 56.320: 99.9686% ( 1) 00:11:01.228 87.040 - 87.505: 99.9790% ( 1) 00:11:01.228 92.160 - 92.625: 99.9895% ( 1) 00:11:01.228 95.884 - 96.349: 100.0000% ( 1) 00:11:01.228 00:11:01.228 00:11:01.228 real 0m1.328s 00:11:01.228 user 0m1.114s 00:11:01.228 sys 0m0.160s 00:11:01.228 ************************************ 00:11:01.228 END TEST nvme_overhead 00:11:01.228 ************************************ 00:11:01.228 11:36:00 nvme.nvme_overhead -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:01.228 11:36:00 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:11:01.228 11:36:00 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:11:01.228 11:36:00 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:11:01.228 11:36:00 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:01.228 11:36:00 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:01.228 ************************************ 00:11:01.228 START TEST nvme_arbitration 00:11:01.228 ************************************ 00:11:01.228 11:36:00 nvme.nvme_arbitration -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:11:05.429 Initializing NVMe Controllers 00:11:05.429 Attached to 0000:00:10.0 00:11:05.429 Attached to 0000:00:11.0 00:11:05.429 Attached to 0000:00:13.0 00:11:05.429 Attached to 0000:00:12.0 00:11:05.429 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:11:05.429 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:11:05.429 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:11:05.429 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:11:05.429 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:11:05.429 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:11:05.429 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:11:05.429 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:11:05.429 Initialization complete. Launching workers. 
00:11:05.429 Starting thread on core 1 with urgent priority queue 00:11:05.429 Starting thread on core 2 with urgent priority queue 00:11:05.429 Starting thread on core 3 with urgent priority queue 00:11:05.429 Starting thread on core 0 with urgent priority queue 00:11:05.429 QEMU NVMe Ctrl (12340 ) core 0: 576.00 IO/s 173.61 secs/100000 ios 00:11:05.429 QEMU NVMe Ctrl (12342 ) core 0: 576.00 IO/s 173.61 secs/100000 ios 00:11:05.429 QEMU NVMe Ctrl (12341 ) core 1: 725.33 IO/s 137.87 secs/100000 ios 00:11:05.429 QEMU NVMe Ctrl (12342 ) core 1: 725.33 IO/s 137.87 secs/100000 ios 00:11:05.429 QEMU NVMe Ctrl (12343 ) core 2: 682.67 IO/s 146.48 secs/100000 ios 00:11:05.429 QEMU NVMe Ctrl (12342 ) core 3: 661.33 IO/s 151.21 secs/100000 ios 00:11:05.429 ======================================================== 00:11:05.429 00:11:05.429 ************************************ 00:11:05.429 END TEST nvme_arbitration 00:11:05.429 ************************************ 00:11:05.429 00:11:05.429 real 0m3.469s 00:11:05.429 user 0m9.400s 00:11:05.429 sys 0m0.173s 00:11:05.429 11:36:03 nvme.nvme_arbitration -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:05.429 11:36:03 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:11:05.429 11:36:03 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:11:05.429 11:36:03 nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:05.429 11:36:03 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:05.429 11:36:03 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:05.429 ************************************ 00:11:05.429 START TEST nvme_single_aen 00:11:05.429 ************************************ 00:11:05.429 11:36:03 nvme.nvme_single_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:11:05.429 Asynchronous Event Request test 00:11:05.429 Attached to 0000:00:10.0 00:11:05.429 Attached to 0000:00:11.0 00:11:05.429 Attached to 0000:00:13.0 00:11:05.429 Attached to 0000:00:12.0 00:11:05.429 Reset controller to setup AER completions for this process 00:11:05.429 Registering asynchronous event callbacks... 
00:11:05.429 Getting orig temperature thresholds of all controllers 00:11:05.429 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:05.429 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:05.429 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:05.429 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:05.429 Setting all controllers temperature threshold low to trigger AER 00:11:05.429 Waiting for all controllers temperature threshold to be set lower 00:11:05.429 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:05.429 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:11:05.429 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:05.429 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:11:05.429 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:05.429 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:11:05.429 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:05.429 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:11:05.429 Waiting for all controllers to trigger AER and reset threshold 00:11:05.429 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:05.429 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:05.429 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:05.429 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:05.429 Cleaning up... 00:11:05.429 00:11:05.429 real 0m0.343s 00:11:05.429 user 0m0.128s 00:11:05.429 sys 0m0.163s 00:11:05.429 11:36:04 nvme.nvme_single_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:05.429 ************************************ 00:11:05.429 END TEST nvme_single_aen 00:11:05.429 ************************************ 00:11:05.429 11:36:04 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:11:05.429 11:36:04 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:11:05.429 11:36:04 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:05.429 11:36:04 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:05.429 11:36:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:05.429 ************************************ 00:11:05.429 START TEST nvme_doorbell_aers 00:11:05.429 ************************************ 00:11:05.429 11:36:04 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1125 -- # nvme_doorbell_aers 00:11:05.429 11:36:04 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:11:05.429 11:36:04 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:11:05.429 11:36:04 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:11:05.429 11:36:04 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:11:05.429 11:36:04 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # bdfs=() 00:11:05.429 11:36:04 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # local bdfs 00:11:05.429 11:36:04 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:05.429 11:36:04 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:05.429 11:36:04 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 
00:11:05.429 11:36:04 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:11:05.429 11:36:04 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:05.429 11:36:04 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:11:05.429 11:36:04 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:11:05.687 [2024-07-25 11:36:04.514947] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68978) is not found. Dropping the request. 00:11:15.678 Executing: test_write_invalid_db 00:11:15.678 Waiting for AER completion... 00:11:15.678 Failure: test_write_invalid_db 00:11:15.678 00:11:15.678 Executing: test_invalid_db_write_overflow_sq 00:11:15.678 Waiting for AER completion... 00:11:15.678 Failure: test_invalid_db_write_overflow_sq 00:11:15.678 00:11:15.678 Executing: test_invalid_db_write_overflow_cq 00:11:15.678 Waiting for AER completion... 00:11:15.678 Failure: test_invalid_db_write_overflow_cq 00:11:15.678 00:11:15.678 11:36:14 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:11:15.678 11:36:14 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:11:15.678 [2024-07-25 11:36:14.562143] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68978) is not found. Dropping the request. 00:11:25.662 Executing: test_write_invalid_db 00:11:25.662 Waiting for AER completion... 00:11:25.662 Failure: test_write_invalid_db 00:11:25.662 00:11:25.662 Executing: test_invalid_db_write_overflow_sq 00:11:25.662 Waiting for AER completion... 00:11:25.662 Failure: test_invalid_db_write_overflow_sq 00:11:25.662 00:11:25.662 Executing: test_invalid_db_write_overflow_cq 00:11:25.662 Waiting for AER completion... 00:11:25.662 Failure: test_invalid_db_write_overflow_cq 00:11:25.662 00:11:25.662 11:36:24 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:11:25.662 11:36:24 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:11:25.662 [2024-07-25 11:36:24.582332] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68978) is not found. Dropping the request. 00:11:35.646 Executing: test_write_invalid_db 00:11:35.646 Waiting for AER completion... 00:11:35.646 Failure: test_write_invalid_db 00:11:35.646 00:11:35.646 Executing: test_invalid_db_write_overflow_sq 00:11:35.646 Waiting for AER completion... 00:11:35.646 Failure: test_invalid_db_write_overflow_sq 00:11:35.646 00:11:35.646 Executing: test_invalid_db_write_overflow_cq 00:11:35.646 Waiting for AER completion... 
00:11:35.646 Failure: test_invalid_db_write_overflow_cq 00:11:35.646 00:11:35.646 11:36:34 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:11:35.646 11:36:34 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:11:35.646 [2024-07-25 11:36:34.660401] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68978) is not found. Dropping the request. 00:11:45.639 Executing: test_write_invalid_db 00:11:45.639 Waiting for AER completion... 00:11:45.639 Failure: test_write_invalid_db 00:11:45.639 00:11:45.639 Executing: test_invalid_db_write_overflow_sq 00:11:45.639 Waiting for AER completion... 00:11:45.639 Failure: test_invalid_db_write_overflow_sq 00:11:45.639 00:11:45.639 Executing: test_invalid_db_write_overflow_cq 00:11:45.639 Waiting for AER completion... 00:11:45.639 Failure: test_invalid_db_write_overflow_cq 00:11:45.639 00:11:45.639 00:11:45.639 real 0m40.266s 00:11:45.639 user 0m34.094s 00:11:45.639 sys 0m5.793s 00:11:45.639 11:36:44 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:45.639 ************************************ 00:11:45.639 END TEST nvme_doorbell_aers 00:11:45.639 ************************************ 00:11:45.639 11:36:44 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:11:45.639 11:36:44 nvme -- nvme/nvme.sh@97 -- # uname 00:11:45.639 11:36:44 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:11:45.639 11:36:44 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:11:45.639 11:36:44 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:11:45.639 11:36:44 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:45.639 11:36:44 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:45.639 ************************************ 00:11:45.639 START TEST nvme_multi_aen 00:11:45.639 ************************************ 00:11:45.639 11:36:44 nvme.nvme_multi_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:11:45.897 [2024-07-25 11:36:44.731769] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68978) is not found. Dropping the request. 00:11:45.897 [2024-07-25 11:36:44.731953] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68978) is not found. Dropping the request. 00:11:45.897 [2024-07-25 11:36:44.731985] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68978) is not found. Dropping the request. 00:11:45.897 [2024-07-25 11:36:44.734167] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68978) is not found. Dropping the request. 00:11:45.897 [2024-07-25 11:36:44.734236] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68978) is not found. Dropping the request. 00:11:45.897 [2024-07-25 11:36:44.734262] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68978) is not found. Dropping the request. 00:11:45.897 [2024-07-25 11:36:44.735977] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68978) is not found. 
Dropping the request. 00:11:45.897 [2024-07-25 11:36:44.736039] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68978) is not found. Dropping the request. 00:11:45.897 [2024-07-25 11:36:44.736080] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68978) is not found. Dropping the request. 00:11:45.897 [2024-07-25 11:36:44.737788] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68978) is not found. Dropping the request. 00:11:45.897 [2024-07-25 11:36:44.737853] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68978) is not found. Dropping the request. 00:11:45.897 [2024-07-25 11:36:44.737878] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68978) is not found. Dropping the request. 00:11:45.897 Child process pid: 69494 00:11:46.156 [Child] Asynchronous Event Request test 00:11:46.156 [Child] Attached to 0000:00:10.0 00:11:46.156 [Child] Attached to 0000:00:11.0 00:11:46.156 [Child] Attached to 0000:00:13.0 00:11:46.156 [Child] Attached to 0000:00:12.0 00:11:46.156 [Child] Registering asynchronous event callbacks... 00:11:46.156 [Child] Getting orig temperature thresholds of all controllers 00:11:46.156 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:46.156 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:46.156 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:46.156 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:46.156 [Child] Waiting for all controllers to trigger AER and reset threshold 00:11:46.156 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:46.156 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:46.156 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:46.156 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:46.156 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:46.156 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:46.156 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:46.156 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:46.156 [Child] Cleaning up... 00:11:46.156 Asynchronous Event Request test 00:11:46.156 Attached to 0000:00:10.0 00:11:46.156 Attached to 0000:00:11.0 00:11:46.156 Attached to 0000:00:13.0 00:11:46.156 Attached to 0000:00:12.0 00:11:46.156 Reset controller to setup AER completions for this process 00:11:46.156 Registering asynchronous event callbacks... 
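Note: the multi-AEN pass above follows one pattern per controller: read the original temperature-threshold feature, set it below the drive's reported temperature so the controller raises an Asynchronous Event, then restore the threshold from the aer_cb callback. A minimal sketch of re-running the tool standalone — binary path and flags exactly as invoked by nvme.sh@98 above; the flag semantics beyond that are not shown in this log:

  # sketch only — same invocation as run_test nvme_multi_aen above
  /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0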
00:11:46.156 Getting orig temperature thresholds of all controllers 00:11:46.156 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:46.156 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:46.156 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:46.156 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:46.156 Setting all controllers temperature threshold low to trigger AER 00:11:46.156 Waiting for all controllers temperature threshold to be set lower 00:11:46.156 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:46.156 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:11:46.156 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:46.156 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:11:46.156 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:46.156 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:11:46.156 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:46.156 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:11:46.156 Waiting for all controllers to trigger AER and reset threshold 00:11:46.156 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:46.156 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:46.156 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:46.156 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:46.156 Cleaning up... 00:11:46.156 00:11:46.156 real 0m0.638s 00:11:46.156 user 0m0.243s 00:11:46.156 sys 0m0.279s 00:11:46.156 11:36:45 nvme.nvme_multi_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:46.156 11:36:45 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:11:46.156 ************************************ 00:11:46.156 END TEST nvme_multi_aen 00:11:46.156 ************************************ 00:11:46.156 11:36:45 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:11:46.156 11:36:45 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:46.156 11:36:45 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:46.156 11:36:45 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:46.156 ************************************ 00:11:46.156 START TEST nvme_startup 00:11:46.156 ************************************ 00:11:46.156 11:36:45 nvme.nvme_startup -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:11:46.426 Initializing NVMe Controllers 00:11:46.426 Attached to 0000:00:10.0 00:11:46.426 Attached to 0000:00:11.0 00:11:46.426 Attached to 0000:00:13.0 00:11:46.426 Attached to 0000:00:12.0 00:11:46.426 Initialization complete. 00:11:46.426 Time used:222899.844 (us). 
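Note: the nvme_startup run above is a plain attach-time measurement: startup probes and attaches every controller, prints the 'Time used' figure, and takes the -t budget passed through from nvme.sh@99. A one-line sketch, assuming the controllers are still bound to the userspace driver as above:

  # sketch only — same invocation as run_test nvme_startup above
  /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000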
00:11:46.426 00:11:46.426 real 0m0.327s 00:11:46.426 user 0m0.119s 00:11:46.426 sys 0m0.156s 00:11:46.426 11:36:45 nvme.nvme_startup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:46.426 11:36:45 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:11:46.426 ************************************ 00:11:46.426 END TEST nvme_startup 00:11:46.426 ************************************ 00:11:46.684 11:36:45 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:11:46.684 11:36:45 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:46.684 11:36:45 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:46.684 11:36:45 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:46.684 ************************************ 00:11:46.684 START TEST nvme_multi_secondary 00:11:46.684 ************************************ 00:11:46.684 11:36:45 nvme.nvme_multi_secondary -- common/autotest_common.sh@1125 -- # nvme_multi_secondary 00:11:46.684 11:36:45 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=69550 00:11:46.684 11:36:45 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:11:46.684 11:36:45 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=69551 00:11:46.684 11:36:45 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:11:46.684 11:36:45 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:11:50.864 Initializing NVMe Controllers 00:11:50.864 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:50.864 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:50.864 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:50.864 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:50.864 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:11:50.864 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:11:50.864 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:11:50.864 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:11:50.864 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:11:50.864 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:11:50.864 Initialization complete. Launching workers. 
00:11:50.864 ======================================================== 00:11:50.864 Latency(us) 00:11:50.864 Device Information : IOPS MiB/s Average min max 00:11:50.864 PCIE (0000:00:10.0) NSID 1 from core 2: 2101.16 8.21 7611.28 1760.98 19577.36 00:11:50.864 PCIE (0000:00:11.0) NSID 1 from core 2: 2101.16 8.21 7614.65 2116.15 15946.12 00:11:50.864 PCIE (0000:00:13.0) NSID 1 from core 2: 2101.16 8.21 7614.78 1778.78 16097.71 00:11:50.864 PCIE (0000:00:12.0) NSID 1 from core 2: 2101.16 8.21 7614.98 1943.32 19436.71 00:11:50.864 PCIE (0000:00:12.0) NSID 2 from core 2: 2101.16 8.21 7624.14 1836.53 20500.07 00:11:50.864 PCIE (0000:00:12.0) NSID 3 from core 2: 2101.16 8.21 7625.29 1897.25 15476.70 00:11:50.864 ======================================================== 00:11:50.864 Total : 12606.95 49.25 7617.52 1760.98 20500.07 00:11:50.864 00:11:50.864 11:36:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 69550 00:11:50.864 Initializing NVMe Controllers 00:11:50.864 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:50.864 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:50.864 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:50.864 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:50.864 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:11:50.864 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:11:50.864 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:11:50.864 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:11:50.864 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:11:50.864 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:11:50.864 Initialization complete. Launching workers. 00:11:50.864 ======================================================== 00:11:50.864 Latency(us) 00:11:50.864 Device Information : IOPS MiB/s Average min max 00:11:50.864 PCIE (0000:00:10.0) NSID 1 from core 1: 4588.18 17.92 3484.68 1250.96 12723.52 00:11:50.864 PCIE (0000:00:11.0) NSID 1 from core 1: 4588.18 17.92 3486.35 1394.23 12728.24 00:11:50.864 PCIE (0000:00:13.0) NSID 1 from core 1: 4588.18 17.92 3486.16 1335.58 12047.39 00:11:50.864 PCIE (0000:00:12.0) NSID 1 from core 1: 4588.18 17.92 3485.94 1458.36 12100.24 00:11:50.864 PCIE (0000:00:12.0) NSID 2 from core 1: 4588.18 17.92 3485.77 1326.32 13061.37 00:11:50.864 PCIE (0000:00:12.0) NSID 3 from core 1: 4588.18 17.92 3485.62 1304.90 12792.04 00:11:50.864 ======================================================== 00:11:50.864 Total : 27529.08 107.54 3485.75 1250.96 13061.37 00:11:50.864 00:11:51.797 Initializing NVMe Controllers 00:11:51.797 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:51.797 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:51.797 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:51.797 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:51.797 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:11:51.797 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:11:51.797 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:11:51.797 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:11:51.797 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:11:51.797 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:11:51.797 Initialization complete. Launching workers. 
00:11:51.797 ======================================================== 00:11:51.797 Latency(us) 00:11:51.797 Device Information : IOPS MiB/s Average min max 00:11:51.797 PCIE (0000:00:10.0) NSID 1 from core 0: 7291.51 28.48 2192.33 1007.79 7641.84 00:11:51.797 PCIE (0000:00:11.0) NSID 1 from core 0: 7291.51 28.48 2193.78 997.35 7247.39 00:11:51.797 PCIE (0000:00:13.0) NSID 1 from core 0: 7291.51 28.48 2193.69 1021.09 7229.86 00:11:51.797 PCIE (0000:00:12.0) NSID 1 from core 0: 7291.51 28.48 2193.66 1001.02 7021.50 00:11:51.797 PCIE (0000:00:12.0) NSID 2 from core 0: 7294.71 28.49 2192.63 979.01 6794.90 00:11:51.797 PCIE (0000:00:12.0) NSID 3 from core 0: 7294.71 28.49 2192.58 948.86 6902.26 00:11:51.797 ======================================================== 00:11:51.797 Total : 43755.46 170.92 2193.11 948.86 7641.84 00:11:51.797 00:11:52.055 11:36:50 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 69551 00:11:52.055 11:36:50 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=69620 00:11:52.055 11:36:50 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:11:52.055 11:36:50 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=69621 00:11:52.055 11:36:50 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:11:52.055 11:36:50 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:11:55.339 Initializing NVMe Controllers 00:11:55.339 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:55.339 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:55.339 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:55.339 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:55.339 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:11:55.339 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:11:55.339 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:11:55.339 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:11:55.339 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:11:55.339 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:11:55.339 Initialization complete. Launching workers. 
00:11:55.339 ======================================================== 00:11:55.339 Latency(us) 00:11:55.339 Device Information : IOPS MiB/s Average min max 00:11:55.339 PCIE (0000:00:10.0) NSID 1 from core 1: 4753.12 18.57 3363.98 1060.98 13624.09 00:11:55.339 PCIE (0000:00:11.0) NSID 1 from core 1: 4753.12 18.57 3366.00 1126.04 13636.38 00:11:55.339 PCIE (0000:00:13.0) NSID 1 from core 1: 4753.12 18.57 3366.13 1142.93 13573.19 00:11:55.339 PCIE (0000:00:12.0) NSID 1 from core 1: 4753.12 18.57 3366.05 1082.31 13844.53 00:11:55.339 PCIE (0000:00:12.0) NSID 2 from core 1: 4753.12 18.57 3365.92 1097.75 13949.65 00:11:55.339 PCIE (0000:00:12.0) NSID 3 from core 1: 4753.12 18.57 3365.87 1095.89 12988.42 00:11:55.339 ======================================================== 00:11:55.339 Total : 28518.72 111.40 3365.66 1060.98 13949.65 00:11:55.339 00:11:55.598 Initializing NVMe Controllers 00:11:55.598 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:55.598 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:55.598 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:55.598 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:55.598 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:11:55.598 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:11:55.598 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:11:55.598 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:11:55.598 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:11:55.598 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:11:55.598 Initialization complete. Launching workers. 00:11:55.598 ======================================================== 00:11:55.598 Latency(us) 00:11:55.598 Device Information : IOPS MiB/s Average min max 00:11:55.598 PCIE (0000:00:10.0) NSID 1 from core 0: 4772.59 18.64 3350.27 1160.70 7638.60 00:11:55.598 PCIE (0000:00:11.0) NSID 1 from core 0: 4772.59 18.64 3351.68 1186.83 7533.62 00:11:55.598 PCIE (0000:00:13.0) NSID 1 from core 0: 4772.59 18.64 3351.50 1146.98 7816.87 00:11:55.598 PCIE (0000:00:12.0) NSID 1 from core 0: 4772.59 18.64 3351.33 1043.11 7428.91 00:11:55.598 PCIE (0000:00:12.0) NSID 2 from core 0: 4772.59 18.64 3351.15 1011.10 7411.90 00:11:55.598 PCIE (0000:00:12.0) NSID 3 from core 0: 4772.59 18.64 3351.00 940.28 7653.51 00:11:55.598 ======================================================== 00:11:55.598 Total : 28635.54 111.86 3351.16 940.28 7816.87 00:11:55.598 00:11:57.498 Initializing NVMe Controllers 00:11:57.498 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:57.498 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:57.498 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:57.498 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:57.498 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:11:57.498 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:11:57.498 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:11:57.498 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:11:57.498 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:11:57.498 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:11:57.498 Initialization complete. Launching workers. 
00:11:57.498 ======================================================== 00:11:57.498 Latency(us) 00:11:57.498 Device Information : IOPS MiB/s Average min max 00:11:57.498 PCIE (0000:00:10.0) NSID 1 from core 2: 3180.17 12.42 5027.95 1161.52 17183.82 00:11:57.498 PCIE (0000:00:11.0) NSID 1 from core 2: 3180.17 12.42 5030.14 1153.67 17110.68 00:11:57.498 PCIE (0000:00:13.0) NSID 1 from core 2: 3180.17 12.42 5030.50 1047.41 16903.06 00:11:57.498 PCIE (0000:00:12.0) NSID 1 from core 2: 3180.17 12.42 5030.14 1014.93 19132.26 00:11:57.498 PCIE (0000:00:12.0) NSID 2 from core 2: 3180.17 12.42 5029.77 925.75 18673.62 00:11:57.498 PCIE (0000:00:12.0) NSID 3 from core 2: 3180.17 12.42 5029.90 848.86 18554.76 00:11:57.498 ======================================================== 00:11:57.498 Total : 19081.01 74.54 5029.73 848.86 19132.26 00:11:57.498 00:11:57.498 ************************************ 00:11:57.498 END TEST nvme_multi_secondary 00:11:57.498 ************************************ 00:11:57.498 11:36:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 69620 00:11:57.498 11:36:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 69621 00:11:57.498 00:11:57.498 real 0m10.772s 00:11:57.498 user 0m18.595s 00:11:57.498 sys 0m1.048s 00:11:57.498 11:36:56 nvme.nvme_multi_secondary -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:57.498 11:36:56 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:11:57.498 11:36:56 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:11:57.498 11:36:56 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:11:57.498 11:36:56 nvme -- common/autotest_common.sh@1089 -- # [[ -e /proc/68555 ]] 00:11:57.498 11:36:56 nvme -- common/autotest_common.sh@1090 -- # kill 68555 00:11:57.498 11:36:56 nvme -- common/autotest_common.sh@1091 -- # wait 68555 00:11:57.498 [2024-07-25 11:36:56.340299] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69493) is not found. Dropping the request. 00:11:57.498 [2024-07-25 11:36:56.340458] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69493) is not found. Dropping the request. 00:11:57.498 [2024-07-25 11:36:56.340514] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69493) is not found. Dropping the request. 00:11:57.498 [2024-07-25 11:36:56.340564] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69493) is not found. Dropping the request. 00:11:57.498 [2024-07-25 11:36:56.343907] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69493) is not found. Dropping the request. 00:11:57.498 [2024-07-25 11:36:56.343974] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69493) is not found. Dropping the request. 00:11:57.498 [2024-07-25 11:36:56.344000] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69493) is not found. Dropping the request. 00:11:57.498 [2024-07-25 11:36:56.344045] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69493) is not found. Dropping the request. 00:11:57.498 [2024-07-25 11:36:56.346187] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69493) is not found. Dropping the request. 
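Note: the nvme_multi_secondary results above come from three spdk_nvme_perf processes sharing one shm id (-i 0) so they attach to the same controllers as one primary plus secondaries, each pinned to its own core mask. A minimal sketch of the pairing, assuming the binary path shown earlier:

  # sketch only — the primary runs longest so the secondaries can join and leave
  perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
  "$perf" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &  # primary, core 0
  pid0=$!
  "$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &  # secondary, core 1
  "$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 &  # secondary, core 2
  wait "$pid0"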
00:11:57.498 [2024-07-25 11:36:56.346245] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69493) is not found. Dropping the request. 00:11:57.498 [2024-07-25 11:36:56.346271] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69493) is not found. Dropping the request. 00:11:57.498 [2024-07-25 11:36:56.346294] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69493) is not found. Dropping the request. 00:11:57.498 [2024-07-25 11:36:56.348405] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69493) is not found. Dropping the request. 00:11:57.498 [2024-07-25 11:36:56.348466] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69493) is not found. Dropping the request. 00:11:57.498 [2024-07-25 11:36:56.348492] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69493) is not found. Dropping the request. 00:11:57.498 [2024-07-25 11:36:56.348515] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69493) is not found. Dropping the request. 00:11:57.757 [2024-07-25 11:36:56.613015] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited. 00:11:57.757 11:36:56 nvme -- common/autotest_common.sh@1093 -- # rm -f /var/run/spdk_stub0 00:11:57.757 11:36:56 nvme -- common/autotest_common.sh@1097 -- # echo 2 00:11:57.757 11:36:56 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:11:57.757 11:36:56 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:57.757 11:36:56 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:57.757 11:36:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:57.757 ************************************ 00:11:57.757 START TEST bdev_nvme_reset_stuck_adm_cmd 00:11:57.757 ************************************ 00:11:57.757 11:36:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:11:57.757 * Looking for test storage... 
00:11:57.757 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:57.757 11:36:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:11:57.757 11:36:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:11:57.757 11:36:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:11:57.757 11:36:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:11:57.757 11:36:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:11:57.757 11:36:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:11:57.757 11:36:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # bdfs=() 00:11:57.757 11:36:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # local bdfs 00:11:57.757 11:36:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:11:57.757 11:36:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:11:57.757 11:36:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # bdfs=() 00:11:57.757 11:36:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # local bdfs 00:11:57.757 11:36:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:57.757 11:36:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:57.757 11:36:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:11:57.757 11:36:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:11:57.757 11:36:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:57.757 11:36:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:11:57.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
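Note: before the stuck-admin-command test can inject anything, it launches an SPDK target (spdk_tgt -m 0xF, traced just below) and blocks in waitforlisten until the RPC socket answers — the 'Waiting for process to start up...' echo above. A minimal sketch of that handshake, assuming rpc.py's default /var/tmp/spdk.sock socket:

  # sketch only — rpc_get_methods is a cheap probe any live target answers
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF &
  spdk_target_pid=$!
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
  done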
00:11:57.757 11:36:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:11:57.757 11:36:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:11:57.757 11:36:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=69776 00:11:57.757 11:36:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:11:57.757 11:36:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:57.757 11:36:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 69776 00:11:57.757 11:36:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@831 -- # '[' -z 69776 ']' 00:11:57.757 11:36:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:57.757 11:36:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:57.757 11:36:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:57.757 11:36:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:57.757 11:36:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:58.015 [2024-07-25 11:36:56.931908] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:11:58.015 [2024-07-25 11:36:56.932378] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69776 ] 00:11:58.274 [2024-07-25 11:36:57.139547] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:58.548 [2024-07-25 11:36:57.496906] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:58.548 [2024-07-25 11:36:57.496978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:58.548 [2024-07-25 11:36:57.497049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.548 [2024-07-25 11:36:57.497057] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:59.515 11:36:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:59.515 11:36:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # return 0 00:11:59.515 11:36:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:11:59.515 11:36:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.515 11:36:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:59.515 nvme0n1 00:11:59.515 11:36:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.515 11:36:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:11:59.515 11:36:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_LlROw.txt 00:11:59.515 11:36:58 
nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:11:59.515 11:36:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.515 11:36:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:59.515 true 00:11:59.515 11:36:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.515 11:36:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:11:59.515 11:36:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1721907418 00:11:59.515 11:36:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=69799 00:11:59.515 11:36:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:11:59.515 11:36:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:59.515 11:36:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:12:01.416 11:37:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:12:01.416 11:37:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.416 11:37:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:01.416 [2024-07-25 11:37:00.421536] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:12:01.416 [2024-07-25 11:37:00.422106] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:12:01.416 [2024-07-25 11:37:00.422272] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:01.416 [2024-07-25 11:37:00.422415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.416 [2024-07-25 11:37:00.424635] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
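Note: that successful reset is the crux of the test. An error injection was armed to hold one admin command (opc 10 decimal, i.e. Get Features, matching the 'GET FEATURES NUMBER OF QUEUES' print above) for up to 15 s without submitting it, completing with SCT 0/SC 1 — the INVALID OPCODE (00/01) shown — and the controller reset must flush the held command promptly. A condensed sketch of the RPC sequence, with the base64-encoded command payload left elided:

  # sketch only — same rpc.py verbs as traced above; <cmd> stands for the base64 payload
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
  "$rpc" bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
      --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
  "$rpc" bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c <cmd> &
  "$rpc" bdev_nvme_reset_controller nvme0  # must complete the held command quickly
  "$rpc" bdev_nvme_detach_controller nvme0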
00:12:01.416 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 69799 00:12:01.416 11:37:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.416 11:37:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 69799 00:12:01.416 11:37:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 69799 00:12:01.416 11:37:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:12:01.416 11:37:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:12:01.416 11:37:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:12:01.416 11:37:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.416 11:37:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:01.416 11:37:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.416 11:37:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:12:01.674 11:37:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_LlROw.txt 00:12:01.674 11:37:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:12:01.674 11:37:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:12:01.674 11:37:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:12:01.674 11:37:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:12:01.674 11:37:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:12:01.674 11:37:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:12:01.674 11:37:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:12:01.674 11:37:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:12:01.674 11:37:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:12:01.674 11:37:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:12:01.674 11:37:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:12:01.674 11:37:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:12:01.675 11:37:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:12:01.675 11:37:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:12:01.675 11:37:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:12:01.675 11:37:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 
-- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:12:01.675 11:37:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:12:01.675 11:37:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:12:01.675 11:37:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:12:01.675 11:37:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_LlROw.txt 00:12:01.675 11:37:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 69776 00:12:01.675 11:37:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@950 -- # '[' -z 69776 ']' 00:12:01.675 11:37:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # kill -0 69776 00:12:01.675 11:37:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # uname 00:12:01.675 11:37:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:01.675 11:37:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69776 00:12:01.675 killing process with pid 69776 00:12:01.675 11:37:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:01.675 11:37:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:01.675 11:37:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69776' 00:12:01.675 11:37:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@969 -- # kill 69776 00:12:01.675 11:37:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@974 -- # wait 69776 00:12:04.202 11:37:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:12:04.202 11:37:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:12:04.202 00:12:04.202 real 0m6.251s 00:12:04.202 user 0m21.051s 00:12:04.202 sys 0m0.778s 00:12:04.202 11:37:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:04.202 11:37:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:04.202 ************************************ 00:12:04.202 END TEST bdev_nvme_reset_stuck_adm_cmd 00:12:04.202 ************************************ 00:12:04.202 11:37:02 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:12:04.202 11:37:02 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:12:04.202 11:37:02 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:04.202 11:37:02 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:04.202 11:37:02 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:04.202 ************************************ 00:12:04.202 START TEST nvme_fio 00:12:04.202 ************************************ 00:12:04.202 11:37:02 nvme.nvme_fio -- common/autotest_common.sh@1125 -- # nvme_fio_test 00:12:04.202 11:37:02 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:12:04.202 11:37:02 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:12:04.202 11:37:02 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:12:04.202 11:37:02 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # bdfs=() 
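Note: nvme_fio starts by enumerating its targets the same way the earlier tests did: gen_nvme.sh emits a bdev config for every NVMe device present and jq extracts the PCIe addresses into the bdfs array being built in the trace just below. The whole discovery is one line, assuming the repo layout above:

  # sketch only — identical to the get_nvme_bdfs helper traced below
  bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))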
00:12:04.202 11:37:02 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # local bdfs 00:12:04.202 11:37:02 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:04.202 11:37:02 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:04.202 11:37:02 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:12:04.202 11:37:03 nvme.nvme_fio -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:12:04.202 11:37:03 nvme.nvme_fio -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:04.202 11:37:03 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:12:04.202 11:37:03 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:12:04.202 11:37:03 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:12:04.202 11:37:03 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:12:04.202 11:37:03 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:12:04.459 11:37:03 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:12:04.460 11:37:03 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:12:04.718 11:37:03 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:12:04.718 11:37:03 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:12:04.718 11:37:03 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:12:04.718 11:37:03 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:12:04.718 11:37:03 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:04.718 11:37:03 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:12:04.718 11:37:03 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:04.718 11:37:03 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:12:04.718 11:37:03 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:12:04.718 11:37:03 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:12:04.718 11:37:03 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:04.718 11:37:03 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:12:04.718 11:37:03 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:12:04.718 11:37:03 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:04.718 11:37:03 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:04.718 11:37:03 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:12:04.718 11:37:03 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:12:04.718 11:37:03 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # 
/usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:12:04.975 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:12:04.975 fio-3.35 00:12:04.975 Starting 1 thread 00:12:08.258 00:12:08.258 test: (groupid=0, jobs=1): err= 0: pid=69956: Thu Jul 25 11:37:06 2024 00:12:08.258 read: IOPS=14.2k, BW=55.5MiB/s (58.2MB/s)(111MiB/2001msec) 00:12:08.258 slat (usec): min=4, max=189, avg= 7.33, stdev= 2.24 00:12:08.258 clat (usec): min=249, max=9575, avg=4482.41, stdev=541.75 00:12:08.258 lat (usec): min=255, max=9764, avg=4489.74, stdev=542.56 00:12:08.258 clat percentiles (usec): 00:12:08.258 | 1.00th=[ 3392], 5.00th=[ 3785], 10.00th=[ 3851], 20.00th=[ 3949], 00:12:08.258 | 30.00th=[ 4080], 40.00th=[ 4359], 50.00th=[ 4686], 60.00th=[ 4752], 00:12:08.258 | 70.00th=[ 4817], 80.00th=[ 4883], 90.00th=[ 4948], 95.00th=[ 5080], 00:12:08.258 | 99.00th=[ 5604], 99.50th=[ 7046], 99.90th=[ 8225], 99.95th=[ 8356], 00:12:08.258 | 99.99th=[ 9503] 00:12:08.258 bw ( KiB/s): min=50856, max=57800, per=95.62%, avg=54320.00, stdev=3472.03, samples=3 00:12:08.258 iops : min=12714, max=14450, avg=13580.00, stdev=868.01, samples=3 00:12:08.258 write: IOPS=14.2k, BW=55.5MiB/s (58.2MB/s)(111MiB/2001msec); 0 zone resets 00:12:08.258 slat (nsec): min=5042, max=54046, avg=7601.92, stdev=1909.78 00:12:08.258 clat (usec): min=278, max=9412, avg=4490.17, stdev=544.40 00:12:08.258 lat (usec): min=284, max=9432, avg=4497.77, stdev=545.16 00:12:08.258 clat percentiles (usec): 00:12:08.258 | 1.00th=[ 3425], 5.00th=[ 3785], 10.00th=[ 3851], 20.00th=[ 3982], 00:12:08.258 | 30.00th=[ 4080], 40.00th=[ 4359], 50.00th=[ 4686], 60.00th=[ 4752], 00:12:08.258 | 70.00th=[ 4817], 80.00th=[ 4883], 90.00th=[ 4948], 95.00th=[ 5080], 00:12:08.258 | 99.00th=[ 5800], 99.50th=[ 7242], 99.90th=[ 8160], 99.95th=[ 8291], 00:12:08.258 | 99.99th=[ 9241] 00:12:08.258 bw ( KiB/s): min=51136, max=57504, per=95.70%, avg=54386.67, stdev=3186.09, samples=3 00:12:08.258 iops : min=12784, max=14376, avg=13596.67, stdev=796.52, samples=3 00:12:08.258 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:12:08.258 lat (msec) : 2=0.05%, 4=23.66%, 10=76.24% 00:12:08.258 cpu : usr=98.90%, sys=0.10%, ctx=15, majf=0, minf=607 00:12:08.258 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:12:08.258 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:08.258 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:08.258 issued rwts: total=28417,28428,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:08.258 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:08.258 00:12:08.258 Run status group 0 (all jobs): 00:12:08.258 READ: bw=55.5MiB/s (58.2MB/s), 55.5MiB/s-55.5MiB/s (58.2MB/s-58.2MB/s), io=111MiB (116MB), run=2001-2001msec 00:12:08.258 WRITE: bw=55.5MiB/s (58.2MB/s), 55.5MiB/s-55.5MiB/s (58.2MB/s-58.2MB/s), io=111MiB (116MB), run=2001-2001msec 00:12:08.258 ----------------------------------------------------- 00:12:08.258 Suppressions used: 00:12:08.258 count bytes template 00:12:08.258 1 32 /usr/src/fio/parse.c 00:12:08.258 1 8 libtcmalloc_minimal.so 00:12:08.258 ----------------------------------------------------- 00:12:08.258 00:12:08.258 11:37:07 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:12:08.258 11:37:07 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:12:08.258 11:37:07 nvme.nvme_fio -- nvme/nvme.sh@35 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:12:08.258 11:37:07 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:12:08.516 11:37:07 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:12:08.516 11:37:07 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:12:08.775 11:37:07 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:12:08.775 11:37:07 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:12:08.775 11:37:07 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:12:08.775 11:37:07 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:12:08.775 11:37:07 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:08.775 11:37:07 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:12:08.775 11:37:07 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:08.775 11:37:07 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:12:08.775 11:37:07 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:12:08.775 11:37:07 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:12:08.775 11:37:07 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:08.775 11:37:07 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:12:08.775 11:37:07 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:12:08.775 11:37:07 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:08.775 11:37:07 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:08.775 11:37:07 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:12:08.775 11:37:07 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:12:08.775 11:37:07 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:12:08.775 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:12:08.775 fio-3.35 00:12:08.775 Starting 1 thread 00:12:12.117 00:12:12.118 test: (groupid=0, jobs=1): err= 0: pid=70022: Thu Jul 25 11:37:11 2024 00:12:12.118 read: IOPS=14.7k, BW=57.4MiB/s (60.2MB/s)(115MiB/2001msec) 00:12:12.118 slat (nsec): min=4848, max=52863, avg=7439.62, stdev=2200.43 00:12:12.118 clat (usec): min=295, max=9923, avg=4333.92, stdev=529.46 00:12:12.118 lat (usec): min=302, max=9976, avg=4341.36, stdev=530.32 00:12:12.118 clat percentiles (usec): 00:12:12.118 | 1.00th=[ 3195], 5.00th=[ 3621], 10.00th=[ 3720], 20.00th=[ 3916], 00:12:12.118 | 30.00th=[ 4146], 40.00th=[ 4228], 50.00th=[ 4359], 60.00th=[ 4424], 00:12:12.118 | 70.00th=[ 4490], 80.00th=[ 4621], 90.00th=[ 5014], 95.00th=[ 5211], 00:12:12.118 | 99.00th=[ 6063], 99.50th=[ 6521], 99.90th=[ 7832], 99.95th=[ 
8356], 00:12:12.118 | 99.99th=[ 9896] 00:12:12.118 bw ( KiB/s): min=54656, max=59552, per=96.59%, avg=56765.33, stdev=2517.30, samples=3 00:12:12.118 iops : min=13664, max=14888, avg=14191.33, stdev=629.32, samples=3 00:12:12.118 write: IOPS=14.7k, BW=57.5MiB/s (60.3MB/s)(115MiB/2001msec); 0 zone resets 00:12:12.118 slat (nsec): min=5002, max=58605, avg=7583.13, stdev=2180.12 00:12:12.118 clat (usec): min=240, max=9810, avg=4337.99, stdev=527.07 00:12:12.118 lat (usec): min=246, max=9852, avg=4345.57, stdev=527.90 00:12:12.118 clat percentiles (usec): 00:12:12.118 | 1.00th=[ 3261], 5.00th=[ 3621], 10.00th=[ 3720], 20.00th=[ 3916], 00:12:12.118 | 30.00th=[ 4146], 40.00th=[ 4293], 50.00th=[ 4359], 60.00th=[ 4424], 00:12:12.118 | 70.00th=[ 4490], 80.00th=[ 4621], 90.00th=[ 5014], 95.00th=[ 5211], 00:12:12.118 | 99.00th=[ 5932], 99.50th=[ 6521], 99.90th=[ 7832], 99.95th=[ 8586], 00:12:12.118 | 99.99th=[ 9634] 00:12:12.118 bw ( KiB/s): min=54976, max=59168, per=96.45%, avg=56754.67, stdev=2166.87, samples=3 00:12:12.118 iops : min=13744, max=14792, avg=14188.67, stdev=541.72, samples=3 00:12:12.118 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.02% 00:12:12.118 lat (msec) : 2=0.06%, 4=24.04%, 10=75.86% 00:12:12.118 cpu : usr=98.90%, sys=0.10%, ctx=5, majf=0, minf=606 00:12:12.118 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:12:12.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:12.118 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:12.118 issued rwts: total=29398,29435,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:12.118 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:12.118 00:12:12.118 Run status group 0 (all jobs): 00:12:12.118 READ: bw=57.4MiB/s (60.2MB/s), 57.4MiB/s-57.4MiB/s (60.2MB/s-60.2MB/s), io=115MiB (120MB), run=2001-2001msec 00:12:12.118 WRITE: bw=57.5MiB/s (60.3MB/s), 57.5MiB/s-57.5MiB/s (60.3MB/s-60.3MB/s), io=115MiB (121MB), run=2001-2001msec 00:12:12.376 ----------------------------------------------------- 00:12:12.376 Suppressions used: 00:12:12.376 count bytes template 00:12:12.376 1 32 /usr/src/fio/parse.c 00:12:12.376 1 8 libtcmalloc_minimal.so 00:12:12.376 ----------------------------------------------------- 00:12:12.376 00:12:12.376 11:37:11 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:12:12.376 11:37:11 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:12:12.376 11:37:11 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:12:12.376 11:37:11 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:12:12.635 11:37:11 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:12:12.635 11:37:11 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:12:13.202 11:37:11 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:12:13.202 11:37:11 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:12:13.202 11:37:11 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:12:13.202 11:37:11 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 
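Note: each per-device fio pass runs through the fio_nvme wrapper being traced around here: it locates the sanitizer runtime, LD_PRELOADs libasan ahead of SPDK's fio ioengine (ASan generally has to come first in the preload list), and addresses the controller by transport string instead of a block device. A minimal sketch of the final invocation, with paths exactly as logged:

  # sketch only — the '--filename=trtype=...' form is how the spdk_nvme ioengine picks its target
  LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' \
    /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
    '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096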
00:12:13.202 11:37:11 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:13.202 11:37:11 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:12:13.202 11:37:11 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:13.202 11:37:11 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:12:13.202 11:37:11 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:12:13.202 11:37:11 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:12:13.202 11:37:11 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:12:13.202 11:37:11 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:13.202 11:37:11 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:12:13.202 11:37:11 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:13.202 11:37:11 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:13.202 11:37:11 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:12:13.202 11:37:11 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:12:13.202 11:37:11 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:12:13.202 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:12:13.202 fio-3.35 00:12:13.202 Starting 1 thread 00:12:17.392 00:12:17.392 test: (groupid=0, jobs=1): err= 0: pid=70083: Thu Jul 25 11:37:16 2024 00:12:17.392 read: IOPS=14.9k, BW=58.1MiB/s (60.9MB/s)(116MiB/2001msec) 00:12:17.392 slat (nsec): min=4773, max=56512, avg=7193.92, stdev=2332.77 00:12:17.392 clat (usec): min=316, max=9969, avg=4283.94, stdev=751.29 00:12:17.392 lat (usec): min=321, max=10025, avg=4291.13, stdev=752.28 00:12:17.392 clat percentiles (usec): 00:12:17.392 | 1.00th=[ 3130], 5.00th=[ 3425], 10.00th=[ 3523], 20.00th=[ 3687], 00:12:17.392 | 30.00th=[ 3949], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4359], 00:12:17.392 | 70.00th=[ 4424], 80.00th=[ 4555], 90.00th=[ 4752], 95.00th=[ 5604], 00:12:17.392 | 99.00th=[ 7308], 99.50th=[ 7963], 99.90th=[ 8586], 99.95th=[ 8848], 00:12:17.392 | 99.99th=[ 9896] 00:12:17.392 bw ( KiB/s): min=59024, max=65616, per=100.00%, avg=61594.67, stdev=3527.31, samples=3 00:12:17.392 iops : min=14756, max=16404, avg=15398.67, stdev=881.83, samples=3 00:12:17.392 write: IOPS=14.9k, BW=58.1MiB/s (60.9MB/s)(116MiB/2001msec); 0 zone resets 00:12:17.392 slat (nsec): min=4825, max=63572, avg=7354.80, stdev=2281.19 00:12:17.392 clat (usec): min=240, max=9802, avg=4289.71, stdev=748.90 00:12:17.392 lat (usec): min=246, max=9820, avg=4297.07, stdev=749.83 00:12:17.392 clat percentiles (usec): 00:12:17.392 | 1.00th=[ 3130], 5.00th=[ 3425], 10.00th=[ 3523], 20.00th=[ 3687], 00:12:17.392 | 30.00th=[ 3949], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4359], 00:12:17.392 | 70.00th=[ 4424], 80.00th=[ 4555], 90.00th=[ 4752], 95.00th=[ 5604], 00:12:17.392 | 99.00th=[ 7242], 99.50th=[ 7898], 99.90th=[ 8586], 99.95th=[ 8848], 00:12:17.392 | 99.99th=[ 9634] 00:12:17.392 bw ( KiB/s): min=58160, max=64784, per=100.00%, avg=61176.00, stdev=3351.45, samples=3 00:12:17.392 
iops : min=14540, max=16196, avg=15294.00, stdev=837.86, samples=3 00:12:17.392 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.02% 00:12:17.392 lat (msec) : 2=0.06%, 4=30.97%, 10=68.92% 00:12:17.392 cpu : usr=98.90%, sys=0.10%, ctx=4, majf=0, minf=607 00:12:17.392 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:12:17.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:17.392 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:17.392 issued rwts: total=29746,29758,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:17.392 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:17.392 00:12:17.392 Run status group 0 (all jobs): 00:12:17.392 READ: bw=58.1MiB/s (60.9MB/s), 58.1MiB/s-58.1MiB/s (60.9MB/s-60.9MB/s), io=116MiB (122MB), run=2001-2001msec 00:12:17.392 WRITE: bw=58.1MiB/s (60.9MB/s), 58.1MiB/s-58.1MiB/s (60.9MB/s-60.9MB/s), io=116MiB (122MB), run=2001-2001msec 00:12:17.392 ----------------------------------------------------- 00:12:17.392 Suppressions used: 00:12:17.392 count bytes template 00:12:17.392 1 32 /usr/src/fio/parse.c 00:12:17.392 1 8 libtcmalloc_minimal.so 00:12:17.392 ----------------------------------------------------- 00:12:17.392 00:12:17.392 11:37:16 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:12:17.392 11:37:16 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:12:17.392 11:37:16 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:12:17.392 11:37:16 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:12:17.650 11:37:16 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:12:17.650 11:37:16 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:12:17.908 11:37:16 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:12:17.908 11:37:16 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:12:17.908 11:37:16 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:12:17.908 11:37:16 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:12:17.908 11:37:16 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:17.908 11:37:16 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:12:17.908 11:37:16 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:17.908 11:37:16 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:12:17.908 11:37:16 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:12:17.908 11:37:16 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:12:17.908 11:37:16 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:17.908 11:37:16 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:12:17.908 11:37:16 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:12:18.166 11:37:16 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # 
asan_lib=/usr/lib64/libasan.so.8 00:12:18.166 11:37:16 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:18.166 11:37:16 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:12:18.166 11:37:16 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:12:18.166 11:37:16 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:12:18.166 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:12:18.166 fio-3.35 00:12:18.166 Starting 1 thread 00:12:23.442 00:12:23.442 test: (groupid=0, jobs=1): err= 0: pid=70149: Thu Jul 25 11:37:21 2024 00:12:23.443 read: IOPS=16.4k, BW=63.9MiB/s (67.0MB/s)(128MiB/2001msec) 00:12:23.443 slat (nsec): min=4679, max=79246, avg=6561.10, stdev=1907.02 00:12:23.443 clat (usec): min=322, max=10270, avg=3888.01, stdev=554.89 00:12:23.443 lat (usec): min=329, max=10349, avg=3894.57, stdev=555.64 00:12:23.443 clat percentiles (usec): 00:12:23.443 | 1.00th=[ 2933], 5.00th=[ 3228], 10.00th=[ 3326], 20.00th=[ 3425], 00:12:23.443 | 30.00th=[ 3523], 40.00th=[ 3621], 50.00th=[ 3916], 60.00th=[ 4113], 00:12:23.443 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4359], 95.00th=[ 4490], 00:12:23.443 | 99.00th=[ 5866], 99.50th=[ 6652], 99.90th=[ 7832], 99.95th=[ 8979], 00:12:23.443 | 99.99th=[ 9896] 00:12:23.443 bw ( KiB/s): min=64064, max=71632, per=100.00%, avg=66970.67, stdev=4077.72, samples=3 00:12:23.443 iops : min=16016, max=17908, avg=16742.67, stdev=1019.43, samples=3 00:12:23.443 write: IOPS=16.4k, BW=64.0MiB/s (67.1MB/s)(128MiB/2001msec); 0 zone resets 00:12:23.443 slat (nsec): min=4615, max=43474, avg=6646.59, stdev=1856.83 00:12:23.443 clat (usec): min=401, max=9903, avg=3896.53, stdev=553.16 00:12:23.443 lat (usec): min=408, max=9921, avg=3903.18, stdev=553.88 00:12:23.443 clat percentiles (usec): 00:12:23.443 | 1.00th=[ 2966], 5.00th=[ 3261], 10.00th=[ 3326], 20.00th=[ 3458], 00:12:23.443 | 30.00th=[ 3523], 40.00th=[ 3621], 50.00th=[ 3949], 60.00th=[ 4113], 00:12:23.443 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4359], 95.00th=[ 4490], 00:12:23.443 | 99.00th=[ 5800], 99.50th=[ 6718], 99.90th=[ 8029], 99.95th=[ 9110], 00:12:23.443 | 99.99th=[ 9765] 00:12:23.443 bw ( KiB/s): min=64168, max=71496, per=100.00%, avg=66928.00, stdev=3984.54, samples=3 00:12:23.443 iops : min=16042, max=17874, avg=16732.00, stdev=996.13, samples=3 00:12:23.443 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:12:23.443 lat (msec) : 2=0.04%, 4=51.52%, 10=48.41%, 20=0.01% 00:12:23.443 cpu : usr=98.95%, sys=0.10%, ctx=4, majf=0, minf=605 00:12:23.443 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:12:23.443 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:23.443 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:23.443 issued rwts: total=32740,32802,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:23.443 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:23.443 00:12:23.443 Run status group 0 (all jobs): 00:12:23.443 READ: bw=63.9MiB/s (67.0MB/s), 63.9MiB/s-63.9MiB/s (67.0MB/s-67.0MB/s), io=128MiB (134MB), run=2001-2001msec 00:12:23.443 WRITE: bw=64.0MiB/s (67.1MB/s), 64.0MiB/s-64.0MiB/s (67.1MB/s-67.1MB/s), io=128MiB (134MB), run=2001-2001msec 00:12:23.443 
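Both fio passes above go through the harness's sanitizer-aware launcher: ldd reports which ASAN runtime the SPDK ioengine was linked against, and that runtime is preloaded ahead of the plugin itself so fio can resolve ioengine=spdk. A minimal sketch of the logic the autotest_common.sh xtrace walks through, using the paths from this run (they will differ on other machines):

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
    sanitizers=('libasan' 'libclang_rt.asan')
    asan_lib=
    for sanitizer in "${sanitizers[@]}"; do
        # Take the resolved path (field $3 of ldd output) for the first
        # sanitizer runtime the plugin actually links against.
        asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n $asan_lib ]] && break
    done
    # The sanitizer runtime must come first in LD_PRELOAD, then the ioengine.
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096

The dotted traddr in --filename is deliberate: fio treats ':' as a filename separator, so the SPDK plugin accepts '.' in place of ':' in the PCI address.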
----------------------------------------------------- 00:12:23.443 Suppressions used: 00:12:23.443 count bytes template 00:12:23.443 1 32 /usr/src/fio/parse.c 00:12:23.443 1 8 libtcmalloc_minimal.so 00:12:23.443 ----------------------------------------------------- 00:12:23.443 00:12:23.443 ************************************ 00:12:23.443 END TEST nvme_fio 00:12:23.443 ************************************ 00:12:23.443 11:37:21 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:12:23.443 11:37:21 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:12:23.443 00:12:23.443 real 0m18.777s 00:12:23.443 user 0m14.761s 00:12:23.443 sys 0m2.895s 00:12:23.443 11:37:21 nvme.nvme_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:23.443 11:37:21 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:12:23.443 ************************************ 00:12:23.443 END TEST nvme 00:12:23.443 ************************************ 00:12:23.443 00:12:23.443 real 1m33.201s 00:12:23.443 user 3m46.168s 00:12:23.443 sys 0m16.244s 00:12:23.443 11:37:21 nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:23.443 11:37:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:23.443 11:37:21 -- spdk/autotest.sh@221 -- # [[ 0 -eq 1 ]] 00:12:23.443 11:37:21 -- spdk/autotest.sh@225 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:12:23.443 11:37:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:23.443 11:37:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:23.443 11:37:21 -- common/autotest_common.sh@10 -- # set +x 00:12:23.443 ************************************ 00:12:23.443 START TEST nvme_scc 00:12:23.443 ************************************ 00:12:23.443 11:37:21 nvme_scc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:12:23.443 * Looking for test storage... 
00:12:23.443 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:23.443 11:37:21 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:12:23.443 11:37:21 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:12:23.443 11:37:21 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:12:23.443 11:37:21 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:12:23.443 11:37:21 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:23.443 11:37:21 nvme_scc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:23.443 11:37:21 nvme_scc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:23.443 11:37:21 nvme_scc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:23.443 11:37:21 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.443 11:37:21 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.443 11:37:21 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.443 11:37:21 nvme_scc -- paths/export.sh@5 -- # export PATH 00:12:23.443 11:37:21 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.443 11:37:21 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:12:23.443 11:37:21 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:12:23.443 11:37:21 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:12:23.443 11:37:21 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:12:23.443 11:37:21 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:12:23.443 11:37:21 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:12:23.443 11:37:21 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:12:23.443 11:37:21 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:12:23.443 11:37:21 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:12:23.443 11:37:21 nvme_scc -- 
cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:23.443 11:37:21 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:12:23.443 11:37:21 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:12:23.443 11:37:21 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:12:23.443 11:37:21 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:23.443 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:23.443 Waiting for block devices as requested 00:12:23.701 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:23.701 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:23.701 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:23.959 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:29.249 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:29.249 11:37:27 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:12:29.249 11:37:27 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:12:29.249 11:37:27 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:29.249 11:37:27 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:12:29.249 11:37:27 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:12:29.249 11:37:27 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:12:29.249 11:37:27 nvme_scc -- scripts/common.sh@15 -- # local i 00:12:29.249 11:37:27 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:12:29.249 11:37:27 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:29.249 11:37:27 nvme_scc -- scripts/common.sh@24 -- # return 0 00:12:29.249 11:37:27 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:12:29.249 11:37:27 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:12:29.249 11:37:27 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:12:29.249 11:37:27 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:29.249 11:37:27 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:12:29.249 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.249 11:37:27 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:12:29.249 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.249 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:29.249 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.249 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.249 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:29.249 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:12:29.249 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:12:29.249 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.249 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.249 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:29.249 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:12:29.249 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:12:29.249 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.249 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.249 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:12:29.249 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:12:29.249 
11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:12:29.249 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.249 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.249 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:29.249 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:12:29.249 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:12:29.249 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.249 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.249 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:29.249 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:12:29.249 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:12:29.249 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.249 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.249 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:29.249 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:12:29.249 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:12:29.249 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.249 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.249 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:29.249 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:12:29.249 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:12:29.249 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.249 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.249 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.249 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r 
reg val 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:12:29.250 11:37:27 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0[npss]=0 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.250 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:12:29.251 11:37:27 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 
00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:12:29.251 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 
00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@21 
-- # IFS=: 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:12:29.252 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 
00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.253 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:29.254 11:37:27 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:12:29.254 11:37:27 nvme_scc -- scripts/common.sh@15 -- # local i 00:12:29.254 11:37:27 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:12:29.254 11:37:27 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:29.254 11:37:27 nvme_scc -- scripts/common.sh@24 -- # return 0 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.254 11:37:27 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[ver]="0x10400"' 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:12:29.254 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:12:29.255 11:37:27 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.255 
11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:12:29.255 11:37:27 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.255 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.256 11:37:27 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:12:29.256 11:37:27 nvme_scc 
-- nvme/functions.sh@23 -- # nvme1[pels]=0 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r 
reg val 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.256 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:12:29.257 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:12:29.257 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.257 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.257 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.257 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:12:29.257 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:12:29.257 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.257 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.257 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.257 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:12:29.257 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:12:29.257 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.257 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.257 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:29.257 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:12:29.257 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:12:29.257 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.257 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.257 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:29.257 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:12:29.257 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:12:29.257 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.257 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.257 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.257 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:12:29.257 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:12:29.257 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.257 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.257 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.257 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:12:29.257 11:37:27 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:12:29.257 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.257 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.257 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.257 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:12:29.257 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:12:29.257 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.257 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.257 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:12:29.257 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:12:29.257 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:12:29.257 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.257 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.257 11:37:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.257 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:12:29.257 11:37:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:12:29.257 11:37:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1n1[nsfeat]="0x14"' 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:12:29.257 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.258 
11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 
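Once a namespace array is filled in, the in-use LBA format can be derived from the captured fields. Per the NVMe spec, the low four bits of FLBAS select the LBA format index when the namespace reports at most 16 formats (nvme1n1[nlbaf]=7 above, so that holds); nvme1n1[flbas]=0x7 therefore selects lbaf7, the entry the dump just below flags "(in use)". A sketch, assuming the nvme1n1 array built above:

    flbas=${nvme1n1[flbas]}        # 0x7, captured above
    fmt=$(( flbas & 0xf ))         # low 4 bits: LBA format index
    echo "in-use LBA format: lbaf${fmt}"   # -> lbaf7, matching "(in use)" below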
00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.258 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.259 11:37:28 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:12:29.259 11:37:28 nvme_scc -- scripts/common.sh@15 -- # local i 00:12:29.259 11:37:28 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:12:29.259 11:37:28 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:29.259 11:37:28 nvme_scc -- scripts/common.sh@24 -- # return 0 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:29.259 11:37:28 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg 
val 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.259 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.260 11:37:28 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 
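What the trace above is showing is SPDK's test/nvme/functions.sh nvme_get helper at work: it feeds the output of /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 through a `while IFS=: read -r reg val` loop and eval's every "register : value" pair into a global associative array named after the device, which is why each field appears twice (once as the eval, once as the expanded assignment). A minimal, self-contained sketch of that pattern follows; the function name, key-trimming details, and sample input are illustrative stand-ins, not the real helper.

nvme_get_sketch() {
	local ref=$1 reg val
	declare -gA "$ref"                    # global associative array, e.g. nvme2
	while IFS=: read -r reg val; do
		[[ -n $reg ]] || continue         # skip blank lines, as the trace's [[ -n '' ]] does
		reg=${reg//[[:space:]]/}          # trim the padded key, e.g. 'vid       ' -> 'vid'
		eval "${ref}[$reg]=\"\${val# }\"" # store the value minus its one leading space
	done
}

# Illustrative input standing in for `nvme id-ctrl /dev/nvme2` output:
nvme_get_sketch nvme2 <<'EOF'
vid       : 0x1b36
ssvid     : 0x1af4
sn        : 12342
mdts      : 7
oncs      : 0x15d
EOF

declare -p nvme2   # -> declare -A nvme2=([vid]="0x1b36" [ssvid]="0x1af4" ... [oncs]="0x15d" )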
00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.260 11:37:28 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 
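The fields captured this way drive the suite's later capability checks. For the nvme_scc (simple copy) tests, the interesting one is oncs: per the NVMe base specification, ONCS bit 8 (mask 0x100) advertises the Copy command, and the 0x15d read back above has that bit set. The suite's own helpers do the equivalent of the following hypothetical check against the array built by the sketch above:

if (( nvme2[oncs] & 0x100 )); then   # 0x15d & 0x100 != 0 -> Copy command supported
	echo "nvme2 advertises the Copy command (simple copy)"
fi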
00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.260 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.261 11:37:28 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:12:29.261 11:37:28 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[fuses]=0 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.261 
11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:12:29.261 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme2[ofcs]="0"' 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.262 11:37:28 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.262 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:12:29.263 11:37:28 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:29.263 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme2n2[nsze]="0x100000"' 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 
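Besides the per-device arrays, the trace records the bookkeeping that ties namespaces back to their controller: a bash nameref (`local -n _ctrl_ns=nvme2_ns`) aliases a per-controller table, the for-loop walks `$ctrl/${ctrl##*/}n`* under /sys/class/nvme, and each parsed namespace is filed under its numeric id via `_ctrl_ns[${ns##*n}]=...` once its id-ns dump (nvme2n1, nvme2n2 above) completes. A standalone sketch of that wiring, with illustrative paths and without the real nvme_get call:

declare -A nvme2_ns=()
declare -n _ctrl_ns=nvme2_ns           # nameref: subscript writes land in nvme2_ns
ctrl=/sys/class/nvme/nvme2

for ns in "$ctrl/${ctrl##*/}n"*; do    # expands to .../nvme2n1, .../nvme2n2, ...
	[[ -e $ns ]] || continue           # the glob may not match on this machine
	ns_dev=${ns##*/}                   # basename, e.g. nvme2n1
	# nvme_get "$ns_dev" id-ns "/dev/$ns_dev" would fill ${ns_dev}[...] here
	_ctrl_ns[${ns_dev##*n}]=$ns_dev    # key by namespace id: 1, 2, ...
done

declare -p nvme2_ns   # e.g. declare -A nvme2_ns=([1]="nvme2n1" [2]="nvme2n2" )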
00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.264 11:37:28 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:29.264 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:12:29.265 11:37:28 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@20 
-- # local -gA 'nvme2n3=()' 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.265 11:37:28 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:12:29.265 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme2n3[nabo]="0"' 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
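Everything parsed in this stretch is zero on the emulated drive: nabo/nabspf (atomic boundary hints), noiob (optimal I/O boundary), nvmcap, and the npwg/npwa/npdg/npda/nows preferred-granularity fields. The non-zero values that follow, mssrl, mcl, and msrc, are the Copy-command limits (maximum single source range length, maximum copy length, maximum source range count), which is the capability this scc test ultimately cares about. The same fields can be pulled directly as JSON (a sketch; field names per recent nvme-cli, and the copy-related keys may be absent on older builds):

  nvme id-ns /dev/nvme2n2 --output-format=json \
      | jq '{nsze, ncap, nuse, mssrl, mcl, msrc}'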
00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
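nguid and eui64 both parse as all zeros here, i.e. these QEMU namespaces carry no unique identifier in their Identify data. For a cross-check outside nvme-cli, recent Linux kernels expose the same identifiers as sysfs attributes (a sketch; attribute names per the upstream nvme driver and may vary by kernel version):

  cat /sys/class/nvme/nvme2/nvme2n3/nguid
  cat /sys/class/nvme/nvme2/nvme2n3/eui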
00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
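The eight lbaf entries above describe the namespace's supported LBA formats: ms is the per-block metadata size in bytes, lbads is the base-2 logarithm of the data block size (lbads:9 → 512 B, lbads:12 → 4096 B), and rp is the relative-performance hint. The flbas=0x4 value parsed earlier selects entry 4, matching the "(in use)" tag on lbaf4, so this namespace runs 4 KiB blocks with no metadata. Decoding the block size from a descriptor is a single shift (standalone sketch):

  # Decode "ms:0 lbads:12 rp:0 (in use)" into a block size in bytes.
  desc='ms:0 lbads:12 rp:0 (in use)'
  lbads=$(sed -n 's/.*lbads:\([0-9]*\).*/\1/p' <<<"$desc")
  echo "block size: $((1 << lbads)) bytes"      # 2^12 = 4096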
00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:12:29.266 11:37:28 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:12:29.266 11:37:28 nvme_scc -- scripts/common.sh@15 -- # local i 00:12:29.267 11:37:28 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:12:29.267 11:37:28 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:29.267 11:37:28 nvme_scc -- scripts/common.sh@24 -- # return 0 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.267 11:37:28 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:12:29.267 11:37:28 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:12:29.267 11:37:28 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.267 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.268 11:37:28 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:12:29.268 11:37:28 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.268 11:37:28 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.268 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:12:29.269 11:37:28 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:12:29.269 
11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 
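Everything from the first register down to active_power_workload above is produced by one pattern: `nvme_get` pipes `nvme id-ctrl` output through a `read` loop with `IFS=:` and evals each pair into a global associative array named after the controller. A minimal standalone sketch of that pattern (assuming nvme-cli is installed and a /dev/nvme3 node exists; the real helper in nvme/functions.sh does the same job with extra quoting via eval):

```bash
# Sketch of the nvme_get parsing loop traced above: "reg : val" lines are
# split on ':' and stored in an associative array keyed by register name.
declare -A nvme3
while IFS=: read -r reg val; do
  reg=${reg//[[:space:]]/}    # strip the column padding around the name
  val=${val# }                # drop the leading space after the colon
  [[ -n $reg && -n $val ]] && nvme3[$reg]=$val
done < <(nvme id-ctrl /dev/nvme3)
echo "oncs=${nvme3[oncs]:-unset}"   # e.g. oncs=0x15d, as in the trace
```

Multi-colon fields such as ps0 or rwt survive this intact because `read` leaves the remainder of the line in the last variable, which is why the trace shows values like 'mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' stored whole.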
00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:12:29.269 11:37:28 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@194 -- # [[ function == function ]] 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme1 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme1 oncs 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme1 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme1 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme1 oncs 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:12:29.269 11:37:28 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:12:29.270 11:37:28 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:12:29.270 11:37:28 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:12:29.270 11:37:28 nvme_scc -- nvme/functions.sh@197 -- # echo nvme1 00:12:29.270 11:37:28 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:12:29.270 11:37:28 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:12:29.270 11:37:28 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:12:29.270 11:37:28 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme0 00:12:29.270 11:37:28 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:12:29.270 11:37:28 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:12:29.270 11:37:28 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:12:29.270 11:37:28 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:12:29.270 11:37:28 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:12:29.270 11:37:28 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:12:29.270 11:37:28 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:12:29.270 11:37:28 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:12:29.270 11:37:28 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:12:29.270 11:37:28 nvme_scc -- nvme/functions.sh@197 -- # echo nvme0 00:12:29.270 11:37:28 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:12:29.270 11:37:28 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme3 00:12:29.270 11:37:28 nvme_scc -- 
nvme/functions.sh@182 -- # local ctrl=nvme3 oncs 00:12:29.270 11:37:28 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme3 00:12:29.270 11:37:28 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme3 00:12:29.270 11:37:28 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme3 oncs 00:12:29.270 11:37:28 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:12:29.270 11:37:28 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:12:29.270 11:37:28 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:12:29.270 11:37:28 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:12:29.270 11:37:28 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:12:29.270 11:37:28 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:12:29.270 11:37:28 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:12:29.270 11:37:28 nvme_scc -- nvme/functions.sh@197 -- # echo nvme3 00:12:29.270 11:37:28 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:12:29.529 11:37:28 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme2 00:12:29.529 11:37:28 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme2 oncs 00:12:29.529 11:37:28 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme2 00:12:29.529 11:37:28 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme2 00:12:29.529 11:37:28 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme2 oncs 00:12:29.529 11:37:28 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:12:29.529 11:37:28 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:12:29.529 11:37:28 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:12:29.529 11:37:28 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:12:29.529 11:37:28 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:12:29.529 11:37:28 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:12:29.529 11:37:28 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:12:29.529 11:37:28 nvme_scc -- nvme/functions.sh@197 -- # echo nvme2 00:12:29.529 11:37:28 nvme_scc -- nvme/functions.sh@205 -- # (( 4 > 0 )) 00:12:29.529 11:37:28 nvme_scc -- nvme/functions.sh@206 -- # echo nvme1 00:12:29.529 11:37:28 nvme_scc -- nvme/functions.sh@207 -- # return 0 00:12:29.529 11:37:28 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:12:29.529 11:37:28 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:12:29.529 11:37:28 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:29.787 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:30.721 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:12:30.721 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:30.721 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:12:30.721 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:30.721 11:37:29 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:12:30.721 11:37:29 nvme_scc -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:30.721 11:37:29 nvme_scc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:30.721 11:37:29 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:12:30.721 ************************************ 00:12:30.721 START TEST nvme_simple_copy 00:12:30.721 ************************************ 00:12:30.721 11:37:29 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1125 -- # 
/home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:12:30.979 Initializing NVMe Controllers
00:12:30.979 Attaching to 0000:00:10.0
00:12:30.979 Controller supports SCC. Attached to 0000:00:10.0
00:12:30.979 Namespace ID: 1 size: 6GB
00:12:30.979 Initialization complete.
00:12:30.979
00:12:30.979 Controller QEMU NVMe Ctrl (12340 )
00:12:30.979 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:12:30.979 Namespace Block Size:4096
00:12:30.979 Writing LBAs 0 to 63 with Random Data
00:12:30.979 Copied LBAs from 0 - 63 to the Destination LBA 256
00:12:30.979 LBAs matching Written Data: 64
00:12:30.979
00:12:30.979 real 0m0.338s
00:12:30.979 user 0m0.133s
00:12:30.979 sys 0m0.102s
00:12:30.979 11:37:29 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1126 -- # xtrace_disable
00:12:30.979 ************************************
00:12:30.979 END TEST nvme_simple_copy
11:37:29 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x
00:12:30.979 ************************************
00:12:30.979
00:12:30.979 real 0m8.167s
00:12:30.979 user 0m1.357s
00:12:30.979 sys 0m1.774s
00:12:30.979 11:37:29 nvme_scc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:12:30.979 11:37:29 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:12:30.979 ************************************
00:12:30.979 END TEST nvme_scc
00:12:30.979 ************************************
00:12:30.979 11:37:30 -- spdk/autotest.sh@227 -- # [[ 0 -eq 1 ]]
00:12:30.979 11:37:30 -- spdk/autotest.sh@230 -- # [[ 0 -eq 1 ]]
00:12:30.979 11:37:30 -- spdk/autotest.sh@233 -- # [[ '' -eq 1 ]]
00:12:30.979 11:37:30 -- spdk/autotest.sh@236 -- # [[ 1 -eq 1 ]]
00:12:30.979 11:37:30 -- spdk/autotest.sh@237 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
00:12:30.979 11:37:30 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:12:30.979 11:37:30 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:12:30.979 11:37:30 -- common/autotest_common.sh@10 -- # set +x
00:12:31.237 ************************************
00:12:31.237 START TEST nvme_fdp
00:12:31.237 ************************************
00:12:31.237 11:37:30 nvme_fdp -- common/autotest_common.sh@1125 -- # test/nvme/nvme_fdp.sh
00:12:31.237 * Looking for test storage...
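Before the simple-copy run above, `get_ctrls_with_feature scc` walked all four scanned controllers, and `ctrl_has_scc` admitted every one because ONCS bit 8 (Copy) was set; `nvme_scc.sh` then took the first entry of the resulting array, nvme1 at 0000:00:10.0. A condensed sketch of that selection pass (illustrative only; `get_oncs` here is a simplified stand-in for the nvme/functions.sh helper, and the loop order mirrors the `${!ctrls[@]}` hash order seen in the trace):

```bash
# Pick the first controller whose ONCS word advertises Simple Copy (bit 8).
get_oncs() { nvme id-ctrl "/dev/$1" | awk -F: '/^oncs/ { gsub(/ /, "", $2); print $2 }'; }
ctrl_has_scc() { local oncs; oncs=$(get_oncs "$1"); (( oncs & 1 << 8 )); }
scc_ctrls=()
for ctrl in nvme1 nvme0 nvme3 nvme2; do   # the trace's expansion order
  ctrl_has_scc "$ctrl" && scc_ctrls+=("$ctrl")
done
(( ${#scc_ctrls[@]} > 0 )) && echo "using ${scc_ctrls[0]}"   # -> using nvme1
```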
00:12:31.237 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:31.237 11:37:30 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:12:31.237 11:37:30 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:12:31.237 11:37:30 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:12:31.237 11:37:30 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:12:31.237 11:37:30 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:31.237 11:37:30 nvme_fdp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:31.237 11:37:30 nvme_fdp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:31.237 11:37:30 nvme_fdp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:31.237 11:37:30 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.237 11:37:30 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.237 11:37:30 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.237 11:37:30 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:12:31.237 11:37:30 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.237 11:37:30 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:12:31.237 11:37:30 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:12:31.237 11:37:30 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:12:31.237 11:37:30 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:12:31.237 11:37:30 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:12:31.237 11:37:30 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:12:31.237 11:37:30 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:12:31.237 11:37:30 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:12:31.237 11:37:30 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:12:31.237 11:37:30 nvme_fdp -- 
cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:31.237 11:37:30 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:31.494 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:31.753 Waiting for block devices as requested 00:12:31.753 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:31.753 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:32.010 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:32.010 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:37.326 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:37.326 11:37:36 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:12:37.326 11:37:36 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:12:37.326 11:37:36 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:37.326 11:37:36 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:12:37.326 11:37:36 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:12:37.326 11:37:36 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:12:37.326 11:37:36 nvme_fdp -- scripts/common.sh@15 -- # local i 00:12:37.326 11:37:36 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:12:37.326 11:37:36 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:37.326 11:37:36 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:12:37.326 11:37:36 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:12:37.326 11:37:36 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:12:37.326 11:37:36 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:12:37.326 11:37:36 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:37.326 11:37:36 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:12:37.326 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.326 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.326 11:37:36 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:12:37.326 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:37.326 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.326 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.326 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:37.326 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:12:37.326 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:12:37.326 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.327 
11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme0[rtd3e]="0"' 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:12:37.327 11:37:36 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:12:37.327 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.328 11:37:36 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:12:37.328 11:37:36 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.328 11:37:36 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:12:37.328 11:37:36 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.328 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.329 11:37:36 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n - ]] 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:37.329 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:37.330 
11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:12:37.330 
11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.330 11:37:36 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.330 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:12:37.331 11:37:36 
nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 
00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:12:37.331 11:37:36 nvme_fdp -- scripts/common.sh@15 -- # local i 00:12:37.331 11:37:36 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:12:37.331 11:37:36 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:37.331 11:37:36 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:12:37.331 11:37:36 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:37.331 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.332 
11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:12:37.332 11:37:36 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[elpe]=0 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.332 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:12:37.333 11:37:36 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.333 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 
00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:37.334 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.335 11:37:36 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:12:37.335 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 
00:12:37.335-00:12:37.336 11:37:36 nvme_fdp -- nvme/functions.sh@21-23 -- nvme_get nvme1n1 (id-ns /dev/nvme1n1, continued):
    npdg=0   npda=0   nows=0   mssrl=128   mcl=128   msrc=127   nulbaf=0   anagrpid=0   nsattr=0
    nvmsetid=0   endgid=0   nguid=00000000000000000000000000000000   eui64=0000000000000000
    lbaf0='ms:0 lbads:9 rp:0'   lbaf1='ms:8 lbads:9 rp:0'   lbaf2='ms:16 lbads:9 rp:0'
    lbaf3='ms:64 lbads:9 rp:0'   lbaf4='ms:0 lbads:12 rp:0'
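The register dump above (and every dump that follows for nvme2, nvme2n1 and nvme2n2) is the xtrace of one helper, nvme_get, which splits each line of `nvme id-ctrl`/`nvme id-ns` output on ':' and stores it into a global associative array. A minimal sketch of that loop, reconstructed from the @16-@23 trace markers; the whitespace trimming is an assumption, and the real nvme/functions.sh may differ in detail:

    # nvme_get <array-name> <nvme-cli subcommand and args>
    # e.g. nvme_get nvme1n1 id-ns /dev/nvme1n1  ->  ${nvme1n1[npdg]} == 0
    nvme_get() {
        local ref=$1 reg val                   # @17: ref names the target array
        shift                                  # @18
        local -gA "$ref=()"                    # @20: array created at global scope
        while IFS=: read -r reg val; do        # @21: split "reg : val" lines
            reg=${reg//[[:space:]]/}           # assumed cleanup, not visible in the trace
            val=${val# }                       # assumed: drop the space after ':'
            [[ -n $val ]] || continue          # @22: skip lines with no value
            eval "${ref}[$reg]=\"\$val\""      # @23: e.g. nvme1n1[npda]="0"
        done < <(/usr/local/src/nvme-cli/nvme "$@")   # @16
    }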
00:12:37.336-00:12:37.337 11:37:36 nvme_fdp -- nvme/functions.sh -- nvme_get nvme1n1 (id-ns, continued):
    lbaf5='ms:8 lbads:12 rp:0'   lbaf6='ms:16 lbads:12 rp:0'   lbaf7='ms:64 lbads:12 rp:0 (in use)'
controller bookkeeping (@58-@63):
    _ctrl_ns[1]=nvme1n1   ctrls[nvme1]=nvme1   nvmes[nvme1]=nvme1_ns   bdfs[nvme1]=0000:00:10.0   ordered_ctrls[1]=nvme1
next controller (@47-@52): /sys/class/nvme/nvme2 exists; pci=0000:00:12.0; pci_can_use 0000:00:12.0 -> return 0 (scripts/common.sh@15-@24); ctrl_dev=nvme2
nvme_get nvme2 -- /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 (@16-@20, local -gA nvme2=()):
    vid=0x1b36   ssvid=0x1af4   sn='12342 '   mn='QEMU NVMe Ctrl '   fr='8.0.0 '
    rab=6   ieee=525400   cmic=0   mdts=7   cntlid=0
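Before nvme2 was identified above, the @49/@50 lines gated it through pci_can_use from scripts/common.sh. The trace only shows an empty regex match and an empty -z test before return 0, which is consistent with an allow-list check against an unset variable; a hedged sketch (PCI_ALLOWED is my name for that list, and the real helper likely also honours a block list not exercised in this run):

    pci_can_use() {
        local i                                    # @15
        # if an allow list is set, the BDF must be on it (@18: '[[ =~ 0000:00:12.0 ]]')
        [[ ${PCI_ALLOWED:-} =~ $1 ]] && return 0
        # an empty allow list lets every device through (@22/@24: the path taken here)
        [[ -z ${PCI_ALLOWED:-} ]] && return 0
        return 1
    }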
00:12:37.337-00:12:37.338 11:37:36 nvme_fdp -- nvme/functions.sh@21-23 -- nvme_get nvme2 (id-ctrl, continued):
    ver=0x10400   rtd3r=0   rtd3e=0   oaes=0x100   ctratt=0x8000   rrls=0   cntrltype=1
    fguid=00000000-0000-0000-0000-000000000000   crdt1=0   crdt2=0   crdt3=0   nvmsr=0   vwci=0   mec=0
    oacs=0x12a   acl=3   aerl=3   frmw=0x3   lpa=0x7   elpe=0   npss=0   avscc=0   apsta=0
    wctemp=343   cctemp=373   mtfa=0   hmpre=0   hmmin=0
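Several id-ctrl registers in this digest are bitmasks; oacs=0x12a here and oncs=0x15d further down are the admin and NVM command-support fields. Assuming the nvme2 array built by nvme_get above, a capability test is one arithmetic expression (bit positions per the NVMe base specification):

    # OACS bit 3 = Namespace Management; 0x12a has bits 1, 3, 5 and 8 set
    if (( ${nvme2[oacs]:-0} & (1 << 3) )); then
        echo "nvme2: namespace management supported"
    fi
    # ONCS bit 2 = Dataset Management: 0x15d & 0x4 != 0, so DSM is supported too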
00:12:37.338-00:12:37.340 11:37:36 nvme_fdp -- nvme/functions.sh@21-23 -- nvme_get nvme2 (id-ctrl, continued):
    tnvmcap=0   unvmcap=0   rpmbs=0   edstt=0   dsto=0   fwug=0   kas=0   hctma=0   mntmt=0   mxtmt=0
    sanicap=0   hmminds=0   hmmaxd=0   nsetidmax=0   endgidmax=0   anatt=0   anacap=0   anagrpmax=0   nanagrpid=0
    pels=0   domainid=0   megcap=0   sqes=0x66   cqes=0x44   maxcmd=0   nn=256   oncs=0x15d   fuses=0
    fna=0   vwc=0x7   awun=0   awupf=0   icsvscc=0   nwpc=0   acwu=0   ocfs=0x3   sgls=0x1   mnan=0
    maxdna=0   maxcna=0   subnqn=nqn.2019-08.org.qemu:12342   ioccsz=0   iorcsz=0   icdoff=0   fcatt=0   msdbd=0   ofcs=0
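sqes=0x66 and cqes=0x44 in the digest above each pack two log2 sizes: the low nibble is the required queue-entry size and the high nibble the maximum. Decoded (again assuming the nvme2 array from above):

    sqes=${nvme2[sqes]}   # 0x66
    cqes=${nvme2[cqes]}   # 0x44
    echo "SQE min/max: $((1 << (sqes & 0xf)))/$((1 << (sqes >> 4))) bytes"   # 64/64
    echo "CQE min/max: $((1 << (cqes & 0xf)))/$((1 << (cqes >> 4))) bytes"   # 16/16

The 343/373 in wctemp/cctemp earlier are Kelvin; $(( ${nvme2[wctemp]} - 273 )) gives the 70 C warning threshold.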
00:12:37.340-00:12:37.342 11:37:36 nvme_fdp -- nvme/functions.sh -- nvme_get nvme2 (id-ctrl, power states):
    ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'   rwt='0 rwl:0 idle_power:- active_power:-'   active_power_workload='-'
namespace walk (@53-@57): local -n _ctrl_ns=nvme2_ns; /sys/class/nvme/nvme2/nvme2n1 exists -> ns_dev=nvme2n1
nvme_get nvme2n1 -- /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 (local -gA nvme2n1=()):
    nsze=0x100000   ncap=0x100000   nuse=0x100000   nsfeat=0x14   nlbaf=7   flbas=0x4   mc=0x3   dpc=0x1f   dps=0
    nmic=0   rescap=0   fpi=0   dlfeat=1   nawun=0   nawupf=0   nacwu=0   nabsn=0   nabo=0   nabspf=0
    noiob=0   nvmcap=0   npwg=0   npwa=0   npdg=0   npda=0   nows=0   mssrl=128   mcl=128   msrc=127
    nulbaf=0   anagrpid=0   nsattr=0   nvmsetid=0   endgid=0   nguid=00000000000000000000000000000000   eui64=0000000000000000
    lbaf0='ms:0 lbads:9 rp:0'   lbaf1='ms:8 lbads:9 rp:0'   lbaf2='ms:16 lbads:9 rp:0'   lbaf3='ms:64 lbads:9 rp:0'
    lbaf4='ms:0 lbads:12 rp:0 (in use)'   lbaf5='ms:8 lbads:12 rp:0'   lbaf6='ms:16 lbads:12 rp:0'   lbaf7='ms:64 lbads:12 rp:0'
bookkeeping (@58): _ctrl_ns[1]=nvme2n1; next namespace (@54-@57): /sys/class/nvme/nvme2/nvme2n2 exists -> ns_dev=nvme2n2
nvme_get nvme2n2 -- /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 (local -gA nvme2n2=()):
    nsze=0x100000   ncap=0x100000   nuse=0x100000   nsfeat=0x14   nlbaf=7   flbas=0x4   mc=0x3   dpc=0x1f   dps=0   nmic=0
00:12:37.342 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.342 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:12:37.342 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:12:37.342 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.342 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.342 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.342 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:12:37.342 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:12:37.342 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.342 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.342 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:37.342 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:12:37.342 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:12:37.342 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.342 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.342 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.342 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:12:37.342 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:12:37.342 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.342 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.342 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.342 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:12:37.342 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:12:37.342 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.342 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.342 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.342 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:12:37.342 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:12:37.342 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.342 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.342 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.342 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:12:37.342 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:12:37.342 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.342 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 
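[editor's note] The entries above all come from one small loop in nvme/functions.sh: nvme_get runs id-ns, splits each "key : value" line on the colon with IFS and read, and stores the pair in a bash associative array named after the device (nvme2n1, nvme2n2, ...). Below is a minimal standalone sketch of that pattern; a canned id-ns-style dump stands in for the real /usr/local/src/nvme-cli/nvme call so it runs without a device, the field values are copied from the trace, and the fixed array name "ns" replaces the dynamic name for which the real helper needs its eval.

    #!/usr/bin/env bash
    # Sketch only: parse "key : value" lines into an associative array,
    # mirroring the IFS=: / read -r reg val / assignment steps traced above.
    declare -A ns    # fixed name; functions.sh uses eval because the
                     # target array name (nvme2n1, nvme2n2, ...) is computed

    sample_id_ns='nsze    : 0x100000
    ncap    : 0x100000
    nuse    : 0x100000
    nsfeat  : 0x14
    nlbaf   : 7
    flbas   : 0x4'

    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}                  # drop padding around the key
        val=${val#"${val%%[![:space:]]*}"}        # trim leading space from the value
        [[ -n $val ]] && ns[$reg]=$val            # skip blanks, as functions.sh@22 does
    done <<< "$sample_id_ns"

    printf 'nsze=%s flbas=%s\n' "${ns[nsze]}" "${ns[flbas]}"

Run directly, this prints "nsze=0x100000 flbas=0x4"; the real helper does the same for every register the controller reports, which is why the trace repeats the IFS/read/eval triplet so many times.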
00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.343 11:37:36 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 
' 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:12:37.343 11:37:36 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:37.344 11:37:36 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:12:37.344 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.344 11:37:36 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:37.604 11:37:36 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:12:37.604 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:37.604 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.604 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.604 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:37.604 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:12:37.604 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:12:37.604 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.604 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.604 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:37.604 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:12:37.604 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:12:37.604 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.604 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.604 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:37.604 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:12:37.604 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:12:37.604 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.604 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.604 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:37.604 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:12:37.604 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:12:37.604 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.604 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.604 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:37.604 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:12:37.604 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:12:37.604 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.604 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.604 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:37.604 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:12:37.604 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:12:37.604 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.604 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.604 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:37.604 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:12:37.604 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:12:37.604 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.604 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.604 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:37.604 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:12:37.604 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:12:37.604 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.604 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.604 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.604 11:37:36 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:12:37.604 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:12:37.604 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.604 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.604 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.604 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:12:37.605 11:37:36 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:37.605 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@60 -- # 
ctrls["$ctrl_dev"]=nvme2 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:12:37.606 11:37:36 nvme_fdp -- scripts/common.sh@15 -- # local i 00:12:37.606 11:37:36 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:12:37.606 11:37:36 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:37.606 11:37:36 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:12:37.606 11:37:36 
nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.606 11:37:36 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:12:37.606 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.607 11:37:36 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 
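[editor's note] Before starting this nvme3 parse, the trace ran pci_can_use 0000:00:13.0 through scripts/common.sh and got a "return 0" after two empty-list checks. The script source is not shown in the log, so the following is only a hedged sketch of what such an allow/block gate can look like; the PCI_BLOCKED and PCI_ALLOWED variable names are assumptions, not confirmed from scripts/common.sh.

    #!/usr/bin/env bash
    # Sketch of a PCI allow/block gate consistent with the traced behavior:
    # empty lists (the "[[ -z '' ]]" step in the log) let every device through.
    pci_can_use() {
        local bdf=$1 i
        for i in $PCI_BLOCKED; do            # an explicit block entry wins
            [[ $i == "$bdf" ]] && return 1
        done
        [[ -z $PCI_ALLOWED ]] && return 0    # no allow list: everything usable
        for i in $PCI_ALLOWED; do
            [[ $i == "$bdf" ]] && return 0
        done
        return 1
    }

    PCI_BLOCKED='' PCI_ALLOWED=''
    pci_can_use 0000:00:13.0 && echo "0000:00:13.0 is usable"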
00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.607 11:37:36 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:12:37.607 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 
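[editor's note] The nvme3 parse above has already recorded ctratt=0x88010. In recent NVMe base specification revisions, bit 19 of CTRATT advertises Flexible Data Placement, which is the capability an nvme_fdp run is looking for; assuming that bit layout (worth verifying against the spec revision in use), a check against the parsed value is a one-liner:

    #!/usr/bin/env bash
    # Sketch: test the assumed FDP bit (bit 19) in the ctratt value the
    # trace captured for nvme3. 0x88010 has bit 19 set, so this prints
    # "FDP advertised".
    declare -A nvme3=( [ctratt]=0x88010 )    # value copied from the trace

    if (( ${nvme3[ctratt]} & (1 << 19) )); then
        echo "ctratt=${nvme3[ctratt]}: FDP advertised"
    else
        echo "ctratt=${nvme3[ctratt]}: no FDP"
    fi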
00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.608 11:37:36 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@23 
-- # nvme3[icsvscc]=0 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.608 
11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:12:37.608 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@61 -- # 
nvmes["$ctrl_dev"]=nvme3_ns 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:12:37.609 11:37:36 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@202 -- # local _ctrls feature=fdp 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@204 -- # get_ctrls_with_feature fdp 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@192 -- # local ctrl feature=fdp 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@194 -- # type -t ctrl_has_fdp 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@194 -- # [[ function == function ]] 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme1 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme1 ctratt 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme1 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme1 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme1 ctratt 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme0 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme0 ctratt 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme0 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme0 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme0 ctratt 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme3 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme3 ctratt 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme3 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme3 00:12:37.609 11:37:36 nvme_fdp -- 
nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme3 ctratt 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x88010 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@197 -- # echo nvme3 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme2 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme2 ctratt 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme2 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme2 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme2 ctratt 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@206 -- # echo nvme3 00:12:37.609 11:37:36 nvme_fdp -- nvme/functions.sh@207 -- # return 0 00:12:37.609 11:37:36 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:12:37.609 11:37:36 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:12:37.609 11:37:36 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:38.175 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:38.743 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:38.743 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:12:38.743 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:12:38.743 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:38.743 11:37:37 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:12:38.743 11:37:37 nvme_fdp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:38.743 11:37:37 nvme_fdp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:38.743 11:37:37 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:12:38.743 ************************************ 00:12:38.743 START TEST nvme_flexible_data_placement 00:12:38.743 ************************************ 00:12:38.743 11:37:37 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:12:39.002 Initializing NVMe Controllers 00:12:39.002 Attaching to 0000:00:13.0 00:12:39.002 Controller supports FDP Attached to 0000:00:13.0 00:12:39.002 Namespace ID: 1 Endurance Group ID: 1 
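The selection loop above settles on nvme3 by testing CTRATT bit 19, the NVMe capability bit that advertises Flexible Data Placement: the 0x88010 reported by nvme3 has bit 19 (0x80000) set, while the 0x8000 reported by the other three controllers does not. The whole probe reduces to the arithmetic test visible in the trace:

    ctratt=0x88010
    if (( ctratt & 1 << 19 )); then
        echo nvme3      # only the FDP-capable controller is echoed back
    fi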
00:12:39.002 Initialization complete. 00:12:39.002 00:12:39.002 ================================== 00:12:39.002 == FDP tests for Namespace: #01 == 00:12:39.002 ================================== 00:12:39.002 00:12:39.002 Get Feature: FDP: 00:12:39.002 ================= 00:12:39.002 Enabled: Yes 00:12:39.002 FDP configuration Index: 0 00:12:39.002 00:12:39.002 FDP configurations log page 00:12:39.002 =========================== 00:12:39.002 Number of FDP configurations: 1 00:12:39.002 Version: 0 00:12:39.002 Size: 112 00:12:39.002 FDP Configuration Descriptor: 0 00:12:39.002 Descriptor Size: 96 00:12:39.002 Reclaim Group Identifier format: 2 00:12:39.002 FDP Volatile Write Cache: Not Present 00:12:39.002 FDP Configuration: Valid 00:12:39.002 Vendor Specific Size: 0 00:12:39.002 Number of Reclaim Groups: 2 00:12:39.002 Number of Reclaim Unit Handles: 8 00:12:39.002 Max Placement Identifiers: 128 00:12:39.002 Number of Namespaces Supported: 256 00:12:39.002 Reclaim unit Nominal Size: 6000000 bytes 00:12:39.002 Estimated Reclaim Unit Time Limit: Not Reported 00:12:39.002 RUH Desc #000: RUH Type: Initially Isolated 00:12:39.002 RUH Desc #001: RUH Type: Initially Isolated 00:12:39.002 RUH Desc #002: RUH Type: Initially Isolated 00:12:39.002 RUH Desc #003: RUH Type: Initially Isolated 00:12:39.002 RUH Desc #004: RUH Type: Initially Isolated 00:12:39.002 RUH Desc #005: RUH Type: Initially Isolated 00:12:39.002 RUH Desc #006: RUH Type: Initially Isolated 00:12:39.002 RUH Desc #007: RUH Type: Initially Isolated 00:12:39.002 00:12:39.002 FDP reclaim unit handle usage log page 00:12:39.002 ====================================== 00:12:39.002 Number of Reclaim Unit Handles: 8 00:12:39.002 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:12:39.002 RUH Usage Desc #001: RUH Attributes: Unused 00:12:39.002 RUH Usage Desc #002: RUH Attributes: Unused 00:12:39.002 RUH Usage Desc #003: RUH Attributes: Unused 00:12:39.002 RUH Usage Desc #004: RUH Attributes: Unused 00:12:39.002 RUH Usage Desc #005: RUH Attributes: Unused 00:12:39.002 RUH Usage Desc #006: RUH Attributes: Unused 00:12:39.002 RUH Usage Desc #007: RUH Attributes: Unused 00:12:39.002 00:12:39.002 FDP statistics log page 00:12:39.002 ======================= 00:12:39.002 Host bytes with metadata written: 810930176 00:12:39.002 Media bytes with metadata written: 811016192 00:12:39.002 Media bytes erased: 0 00:12:39.002 00:12:39.002 FDP Reclaim unit handle status 00:12:39.002 ============================== 00:12:39.002 Number of RUHS descriptors: 2 00:12:39.002 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000005aa3 00:12:39.002 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:12:39.002 00:12:39.002 FDP write on placement id: 0 success 00:12:39.002 00:12:39.002 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:12:39.002 00:12:39.002 IO mgmt send: RUH update for Placement ID: #0 Success 00:12:39.002 00:12:39.002 Get Feature: FDP Events for Placement handle: #0 00:12:39.002 ======================== 00:12:39.002 Number of FDP Events: 6 00:12:39.002 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:12:39.002 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:12:39.002 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:12:39.002 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:12:39.002 FDP Event: #4 Type: Media Reallocated Enabled: No 00:12:39.002 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 
00:12:39.002 00:12:39.002 FDP events log page 00:12:39.002 =================== 00:12:39.002 Number of FDP events: 1 00:12:39.002 FDP Event #0: 00:12:39.002 Event Type: RU Not Written to Capacity 00:12:39.002 Placement Identifier: Valid 00:12:39.002 NSID: Valid 00:12:39.002 Location: Valid 00:12:39.002 Placement Identifier: 0 00:12:39.002 Event Timestamp: 8 00:12:39.002 Namespace Identifier: 1 00:12:39.002 Reclaim Group Identifier: 0 00:12:39.002 Reclaim Unit Handle Identifier: 0 00:12:39.002 00:12:39.002 FDP test passed 00:12:39.002 00:12:39.002 real 0m0.305s 00:12:39.002 user 0m0.101s 00:12:39.002 sys 0m0.101s 00:12:39.002 11:37:37 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:39.002 ************************************ 00:12:39.002 11:37:37 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:12:39.002 END TEST nvme_flexible_data_placement 00:12:39.002 ************************************ 00:12:39.002 00:12:39.002 real 0m8.006s 00:12:39.002 user 0m1.297s 00:12:39.002 sys 0m1.699s 00:12:39.002 11:37:38 nvme_fdp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:39.002 11:37:38 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:12:39.002 ************************************ 00:12:39.002 END TEST nvme_fdp 00:12:39.002 ************************************ 00:12:39.261 11:37:38 -- spdk/autotest.sh@240 -- # [[ '' -eq 1 ]] 00:12:39.261 11:37:38 -- spdk/autotest.sh@244 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:12:39.261 11:37:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:39.261 11:37:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:39.261 11:37:38 -- common/autotest_common.sh@10 -- # set +x 00:12:39.261 ************************************ 00:12:39.261 START TEST nvme_rpc 00:12:39.261 ************************************ 00:12:39.261 11:37:38 nvme_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:12:39.261 * Looking for test storage... 
00:12:39.261 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:39.261 11:37:38 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:39.261 11:37:38 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:12:39.261 11:37:38 nvme_rpc -- common/autotest_common.sh@1524 -- # bdfs=() 00:12:39.261 11:37:38 nvme_rpc -- common/autotest_common.sh@1524 -- # local bdfs 00:12:39.261 11:37:38 nvme_rpc -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:12:39.261 11:37:38 nvme_rpc -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:12:39.261 11:37:38 nvme_rpc -- common/autotest_common.sh@1513 -- # bdfs=() 00:12:39.261 11:37:38 nvme_rpc -- common/autotest_common.sh@1513 -- # local bdfs 00:12:39.261 11:37:38 nvme_rpc -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:39.261 11:37:38 nvme_rpc -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:39.261 11:37:38 nvme_rpc -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:12:39.261 11:37:38 nvme_rpc -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:12:39.261 11:37:38 nvme_rpc -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:39.261 11:37:38 nvme_rpc -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:12:39.261 11:37:38 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:12:39.261 11:37:38 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=71497 00:12:39.261 11:37:38 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:12:39.261 11:37:38 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:12:39.261 11:37:38 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 71497 00:12:39.261 11:37:38 nvme_rpc -- common/autotest_common.sh@831 -- # '[' -z 71497 ']' 00:12:39.261 11:37:38 nvme_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:39.261 11:37:38 nvme_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:39.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:39.261 11:37:38 nvme_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:39.261 11:37:38 nvme_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:39.261 11:37:38 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.519 [2024-07-25 11:37:38.377053] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:12:39.519 [2024-07-25 11:37:38.377249] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71497 ] 00:12:39.519 [2024-07-25 11:37:38.555243] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:40.085 [2024-07-25 11:37:38.834482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:40.085 [2024-07-25 11:37:38.834482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:40.652 11:37:39 nvme_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:40.652 11:37:39 nvme_rpc -- common/autotest_common.sh@864 -- # return 0 00:12:40.652 11:37:39 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:12:41.216 Nvme0n1 00:12:41.216 11:37:40 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:12:41.216 11:37:40 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:12:41.216 request: 00:12:41.216 { 00:12:41.216 "bdev_name": "Nvme0n1", 00:12:41.216 "filename": "non_existing_file", 00:12:41.216 "method": "bdev_nvme_apply_firmware", 00:12:41.216 "req_id": 1 00:12:41.216 } 00:12:41.216 Got JSON-RPC error response 00:12:41.216 response: 00:12:41.216 { 00:12:41.216 "code": -32603, 00:12:41.216 "message": "open file failed." 00:12:41.216 } 00:12:41.216 11:37:40 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:12:41.216 11:37:40 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:12:41.216 11:37:40 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:12:41.472 11:37:40 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:41.473 11:37:40 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 71497 00:12:41.473 11:37:40 nvme_rpc -- common/autotest_common.sh@950 -- # '[' -z 71497 ']' 00:12:41.473 11:37:40 nvme_rpc -- common/autotest_common.sh@954 -- # kill -0 71497 00:12:41.473 11:37:40 nvme_rpc -- common/autotest_common.sh@955 -- # uname 00:12:41.473 11:37:40 nvme_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:41.473 11:37:40 nvme_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71497 00:12:41.473 11:37:40 nvme_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:41.473 11:37:40 nvme_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:41.473 11:37:40 nvme_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71497' 00:12:41.473 killing process with pid 71497 00:12:41.473 11:37:40 nvme_rpc -- common/autotest_common.sh@969 -- # kill 71497 00:12:41.473 11:37:40 nvme_rpc -- common/autotest_common.sh@974 -- # wait 71497 00:12:44.076 00:12:44.076 real 0m4.634s 00:12:44.076 user 0m8.565s 00:12:44.076 sys 0m0.741s 00:12:44.076 11:37:42 nvme_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:44.076 ************************************ 00:12:44.076 11:37:42 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.076 END TEST nvme_rpc 00:12:44.076 ************************************ 00:12:44.076 11:37:42 -- spdk/autotest.sh@245 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:12:44.076 11:37:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 
1 ']' 00:12:44.076 11:37:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:44.076 11:37:42 -- common/autotest_common.sh@10 -- # set +x 00:12:44.076 ************************************ 00:12:44.076 START TEST nvme_rpc_timeouts 00:12:44.076 ************************************ 00:12:44.076 11:37:42 nvme_rpc_timeouts -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:12:44.076 * Looking for test storage... 00:12:44.076 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:44.076 11:37:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:44.076 11:37:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_71573 00:12:44.076 11:37:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_71573 00:12:44.076 11:37:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=71598 00:12:44.076 11:37:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:12:44.076 11:37:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:12:44.077 11:37:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 71598 00:12:44.077 11:37:42 nvme_rpc_timeouts -- common/autotest_common.sh@831 -- # '[' -z 71598 ']' 00:12:44.077 11:37:42 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:44.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:44.077 11:37:42 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:44.077 11:37:42 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:44.077 11:37:42 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:44.077 11:37:42 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:12:44.077 [2024-07-25 11:37:42.988372] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
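waitforlisten blocks until the spdk_tgt just spawned at pid 71598 answers on its RPC socket before any timeout RPCs are sent. A rough sketch of that polling loop, assuming rpc_get_methods as the liveness probe and a short sleep between retries (both are assumptions; max_retries=100 and the socket path come from the trace):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 &
    spdk_tgt_pid=$!
    for ((i = 0; i < 100; i++)); do
        # hypothetical probe: any RPC that succeeds proves the socket is up
        if scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
            break
        fi
        sleep 0.5
    done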
00:12:44.077 [2024-07-25 11:37:42.988570] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71598 ] 00:12:44.335 [2024-07-25 11:37:43.163741] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:44.593 [2024-07-25 11:37:43.405537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.593 [2024-07-25 11:37:43.405537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:45.525 11:37:44 nvme_rpc_timeouts -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:45.525 Checking default timeout settings: 00:12:45.525 11:37:44 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # return 0 00:12:45.525 11:37:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:12:45.525 11:37:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:12:45.782 Making settings changes with rpc: 00:12:45.782 11:37:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:12:45.782 11:37:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:12:46.040 Check default vs. modified settings: 00:12:46.040 11:37:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:12:46.040 11:37:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:12:46.350 11:37:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:12:46.350 11:37:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:12:46.350 11:37:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_71573 00:12:46.350 11:37:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:12:46.350 11:37:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:46.350 11:37:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:12:46.350 11:37:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_71573 00:12:46.350 11:37:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:12:46.350 11:37:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:46.350 11:37:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:12:46.350 11:37:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:12:46.350 Setting action_on_timeout is changed as expected. 00:12:46.350 11:37:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
00:12:46.350 11:37:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:12:46.350 11:37:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_71573 00:12:46.350 11:37:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:46.350 11:37:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:12:46.350 11:37:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:12:46.350 11:37:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_71573 00:12:46.350 11:37:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:12:46.350 11:37:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:46.350 Setting timeout_us is changed as expected. 00:12:46.350 11:37:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:12:46.350 11:37:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:12:46.350 11:37:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:12:46.350 11:37:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:12:46.350 11:37:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_71573 00:12:46.350 11:37:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:12:46.350 11:37:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:46.350 11:37:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:12:46.350 11:37:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_71573 00:12:46.350 11:37:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:12:46.350 11:37:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:46.350 Setting timeout_admin_us is changed as expected. 00:12:46.350 11:37:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:12:46.350 11:37:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:12:46.350 11:37:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
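All three checks above drive one pipeline against the JSON dumped by save_config before and after bdev_nvme_set_options; the grep/awk/sed chain is verbatim from the trace, while get_setting is a hypothetical wrapper name used here for brevity:

    get_setting() {      # hypothetical helper, not in the test script itself
        grep "$1" "$2" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g'
    }
    before=$(get_setting timeout_us /tmp/settings_default_71573)    # "0"
    after=$(get_setting timeout_us /tmp/settings_modified_71573)    # "12000000"
    if [[ $before != "$after" ]]; then
        echo "Setting timeout_us is changed as expected."
    fi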
00:12:46.350 11:37:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:12:46.350 11:37:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_71573 /tmp/settings_modified_71573 00:12:46.350 11:37:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 71598 00:12:46.350 11:37:45 nvme_rpc_timeouts -- common/autotest_common.sh@950 -- # '[' -z 71598 ']' 00:12:46.350 11:37:45 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # kill -0 71598 00:12:46.350 11:37:45 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # uname 00:12:46.350 11:37:45 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:46.350 11:37:45 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71598 00:12:46.350 killing process with pid 71598 00:12:46.350 11:37:45 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:46.350 11:37:45 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:46.350 11:37:45 nvme_rpc_timeouts -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71598' 00:12:46.350 11:37:45 nvme_rpc_timeouts -- common/autotest_common.sh@969 -- # kill 71598 00:12:46.350 11:37:45 nvme_rpc_timeouts -- common/autotest_common.sh@974 -- # wait 71598 00:12:48.876 RPC TIMEOUT SETTING TEST PASSED. 00:12:48.876 11:37:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:12:48.876 ************************************ 00:12:48.876 END TEST nvme_rpc_timeouts 00:12:48.876 ************************************ 00:12:48.876 00:12:48.876 real 0m4.733s 00:12:48.876 user 0m8.819s 00:12:48.876 sys 0m0.728s 00:12:48.876 11:37:47 nvme_rpc_timeouts -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:48.876 11:37:47 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:12:48.876 11:37:47 -- spdk/autotest.sh@247 -- # uname -s 00:12:48.876 11:37:47 -- spdk/autotest.sh@247 -- # '[' Linux = Linux ']' 00:12:48.876 11:37:47 -- spdk/autotest.sh@248 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:12:48.876 11:37:47 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:48.876 11:37:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:48.876 11:37:47 -- common/autotest_common.sh@10 -- # set +x 00:12:48.876 ************************************ 00:12:48.876 START TEST sw_hotplug 00:12:48.876 ************************************ 00:12:48.876 11:37:47 sw_hotplug -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:12:48.876 * Looking for test storage... 
00:12:48.876 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:48.876 11:37:47 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:49.133 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:49.133 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:49.133 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:49.133 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:49.133 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:49.392 11:37:48 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:12:49.392 11:37:48 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:12:49.392 11:37:48 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 00:12:49.392 11:37:48 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:12:49.392 11:37:48 sw_hotplug -- scripts/common.sh@309 -- # local bdf bdfs 00:12:49.392 11:37:48 sw_hotplug -- scripts/common.sh@310 -- # local nvmes 00:12:49.392 11:37:48 sw_hotplug -- scripts/common.sh@312 -- # [[ -n '' ]] 00:12:49.392 11:37:48 sw_hotplug -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:12:49.392 11:37:48 sw_hotplug -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:12:49.392 11:37:48 sw_hotplug -- scripts/common.sh@295 -- # local bdf= 00:12:49.392 11:37:48 sw_hotplug -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:12:49.392 11:37:48 sw_hotplug -- scripts/common.sh@230 -- # local class 00:12:49.392 11:37:48 sw_hotplug -- scripts/common.sh@231 -- # local subclass 00:12:49.392 11:37:48 sw_hotplug -- scripts/common.sh@232 -- # local progif 00:12:49.392 11:37:48 sw_hotplug -- scripts/common.sh@233 -- # printf %02x 1 00:12:49.392 11:37:48 sw_hotplug -- scripts/common.sh@233 -- # class=01 00:12:49.392 11:37:48 sw_hotplug -- scripts/common.sh@234 -- # printf %02x 8 00:12:49.392 11:37:48 sw_hotplug -- scripts/common.sh@234 -- # subclass=08 00:12:49.392 11:37:48 sw_hotplug -- scripts/common.sh@235 -- # printf %02x 2 00:12:49.392 11:37:48 sw_hotplug -- scripts/common.sh@235 -- # progif=02 00:12:49.392 11:37:48 sw_hotplug -- scripts/common.sh@237 -- # hash lspci 00:12:49.392 11:37:48 sw_hotplug -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:12:49.392 11:37:48 sw_hotplug -- scripts/common.sh@239 -- # lspci -mm -n -D 00:12:49.392 11:37:48 sw_hotplug -- scripts/common.sh@240 -- # grep -i -- -p02 00:12:49.392 11:37:48 sw_hotplug -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:12:49.392 11:37:48 sw_hotplug -- scripts/common.sh@242 -- # tr -d '"' 00:12:49.392 11:37:48 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:12:49.392 11:37:48 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:12:49.392 11:37:48 sw_hotplug -- scripts/common.sh@15 -- # local i 00:12:49.392 11:37:48 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:12:49.392 11:37:48 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:49.392 11:37:48 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:12:49.392 11:37:48 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:12:49.392 11:37:48 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:12:49.392 11:37:48 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:12:49.392 11:37:48 sw_hotplug -- 
scripts/common.sh@15 -- # local i 00:12:49.392 11:37:48 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:12:49.392 11:37:48 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:49.392 11:37:48 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:12:49.392 11:37:48 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:12:49.392 11:37:48 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:12:49.392 11:37:48 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:12.0 00:12:49.392 11:37:48 sw_hotplug -- scripts/common.sh@15 -- # local i 00:12:49.392 11:37:48 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:12:49.392 11:37:48 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:49.392 11:37:48 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:12:49.392 11:37:48 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:12.0 00:12:49.393 11:37:48 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:12:49.393 11:37:48 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:13.0 00:12:49.393 11:37:48 sw_hotplug -- scripts/common.sh@15 -- # local i 00:12:49.393 11:37:48 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:12:49.393 11:37:48 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:49.393 11:37:48 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:12:49.393 11:37:48 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:13.0 00:12:49.393 11:37:48 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:12:49.393 11:37:48 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:12:49.393 11:37:48 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:12:49.393 11:37:48 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:12:49.393 11:37:48 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:12:49.393 11:37:48 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:12:49.393 11:37:48 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:12:49.393 11:37:48 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:12:49.393 11:37:48 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:12:49.393 11:37:48 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:12:49.393 11:37:48 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:12:49.393 11:37:48 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:12:49.393 11:37:48 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:12:49.393 11:37:48 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:12:49.393 11:37:48 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:12:49.393 11:37:48 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:12:49.393 11:37:48 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:12:49.393 11:37:48 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:12:49.393 11:37:48 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:12:49.393 11:37:48 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:12:49.393 11:37:48 sw_hotplug -- scripts/common.sh@325 -- # (( 4 )) 00:12:49.393 11:37:48 sw_hotplug -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:49.393 11:37:48 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:12:49.393 11:37:48 sw_hotplug -- 
nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:12:49.393 11:37:48 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:49.650 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:49.908 Waiting for block devices as requested 00:12:49.908 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:49.908 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:50.166 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:50.166 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:55.427 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:55.427 11:37:54 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:12:55.427 11:37:54 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:55.685 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:12:55.685 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:55.685 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:12:55.943 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:12:56.201 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:12:56.201 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:12:56.459 11:37:55 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:12:56.459 11:37:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:56.459 11:37:55 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:12:56.459 11:37:55 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:12:56.459 11:37:55 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=72464 00:12:56.459 11:37:55 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:12:56.459 11:37:55 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:12:56.459 11:37:55 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:12:56.459 11:37:55 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:12:56.459 11:37:55 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:12:56.459 11:37:55 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:12:56.459 11:37:55 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:12:56.459 11:37:55 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:12:56.459 11:37:55 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 false 00:12:56.459 11:37:55 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:12:56.459 11:37:55 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:12:56.459 11:37:55 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:12:56.459 11:37:55 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:12:56.459 11:37:55 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:12:56.717 Initializing NVMe Controllers 00:12:56.717 Attaching to 0000:00:10.0 00:12:56.717 Attaching to 0000:00:11.0 00:12:56.717 Attached to 0000:00:10.0 00:12:56.717 Attached to 0000:00:11.0 00:12:56.717 Initialization complete. Starting I/O... 
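From here remove_attach_helper repeats its cycle three times: surprise-remove both allowed devices while the hotplug example still has I/O in flight, wait the six-second hotplug_wait, then reattach them. The xtrace hides redirections, so the bare echoes at sw_hotplug.sh@40 and @56 through @62 are mapped below onto the generic Linux sysfs hotplug interface; treat the exact target paths as assumptions:

    bdf=0000:00:10.0
    echo 1 > "/sys/bus/pci/devices/$bdf/remove"      # surprise-remove mid-I/O
    sleep 6                                          # hotplug_wait: let aborts drain
    echo 1 > /sys/bus/pci/rescan                     # re-enumerate the function
    echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
    echo "$bdf" > /sys/bus/pci/drivers_probe         # reattach to the userspace driver
    echo '' > "/sys/bus/pci/devices/$bdf/driver_override"

The 'in failed state' and 'aborting outstanding command' errors that follow are the driver's expected reaction to the removal, not test failures.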
00:12:56.717 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:12:56.717 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:12:56.717 00:12:57.653 QEMU NVMe Ctrl (12340 ): 1082 I/Os completed (+1082) 00:12:57.653 QEMU NVMe Ctrl (12341 ): 1112 I/Os completed (+1112) 00:12:57.653 00:12:59.028 QEMU NVMe Ctrl (12340 ): 2274 I/Os completed (+1192) 00:12:59.028 QEMU NVMe Ctrl (12341 ): 2409 I/Os completed (+1297) 00:12:59.028 00:12:59.594 QEMU NVMe Ctrl (12340 ): 3659 I/Os completed (+1385) 00:12:59.595 QEMU NVMe Ctrl (12341 ): 3967 I/Os completed (+1558) 00:12:59.595 00:13:00.967 QEMU NVMe Ctrl (12340 ): 4979 I/Os completed (+1320) 00:13:00.967 QEMU NVMe Ctrl (12341 ): 5441 I/Os completed (+1474) 00:13:00.967 00:13:01.900 QEMU NVMe Ctrl (12340 ): 6383 I/Os completed (+1404) 00:13:01.900 QEMU NVMe Ctrl (12341 ): 6975 I/Os completed (+1534) 00:13:01.900 00:13:02.463 11:38:01 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:02.463 11:38:01 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:02.463 11:38:01 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:02.463 [2024-07-25 11:38:01.401914] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:13:02.463 Controller removed: QEMU NVMe Ctrl (12340 ) 00:13:02.463 [2024-07-25 11:38:01.404689] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:02.463 [2024-07-25 11:38:01.404803] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:02.463 [2024-07-25 11:38:01.404843] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:02.463 [2024-07-25 11:38:01.404878] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:02.463 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:13:02.463 [2024-07-25 11:38:01.408532] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:02.463 [2024-07-25 11:38:01.408613] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:02.463 [2024-07-25 11:38:01.408650] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:02.463 [2024-07-25 11:38:01.408680] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:02.463 11:38:01 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:02.463 11:38:01 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:02.463 [2024-07-25 11:38:01.431285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:13:02.463 Controller removed: QEMU NVMe Ctrl (12341 ) 00:13:02.463 [2024-07-25 11:38:01.433724] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:02.463 [2024-07-25 11:38:01.433803] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:02.463 [2024-07-25 11:38:01.433848] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:02.463 [2024-07-25 11:38:01.433879] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:02.463 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:13:02.463 [2024-07-25 11:38:01.437769] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:02.463 [2024-07-25 11:38:01.437849] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:02.463 [2024-07-25 11:38:01.437887] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:02.463 [2024-07-25 11:38:01.437914] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:02.463 11:38:01 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:13:02.463 11:38:01 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:02.720 11:38:01 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:02.720 11:38:01 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:02.720 11:38:01 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:02.720 11:38:01 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:02.720 00:13:02.720 11:38:01 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:02.720 11:38:01 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:02.720 11:38:01 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:02.720 11:38:01 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:13:02.720 Attaching to 0000:00:10.0 00:13:02.720 Attached to 0000:00:10.0 00:13:02.721 11:38:01 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:02.721 11:38:01 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:02.721 11:38:01 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:02.721 Attaching to 0000:00:11.0 00:13:02.721 Attached to 0000:00:11.0 00:13:03.654 QEMU NVMe Ctrl (12340 ): 1644 I/Os completed (+1644) 00:13:03.654 QEMU NVMe Ctrl (12341 ): 1570 I/Os completed (+1570) 00:13:03.654 00:13:05.075 QEMU NVMe Ctrl (12340 ): 3042 I/Os completed (+1398) 00:13:05.075 QEMU NVMe Ctrl (12341 ): 3051 I/Os completed (+1481) 00:13:05.076 00:13:05.641 QEMU NVMe Ctrl (12340 ): 4466 I/Os completed (+1424) 00:13:05.641 QEMU NVMe Ctrl (12341 ): 4530 I/Os completed (+1479) 00:13:05.641 00:13:06.607 QEMU NVMe Ctrl (12340 ): 5846 I/Os completed (+1380) 00:13:06.607 QEMU NVMe Ctrl (12341 ): 6005 I/Os completed (+1475) 00:13:06.607 00:13:07.981 QEMU NVMe Ctrl (12340 ): 7385 I/Os completed (+1539) 00:13:07.981 QEMU NVMe Ctrl (12341 ): 7600 I/Os completed (+1595) 00:13:07.981 00:13:08.915 QEMU NVMe Ctrl (12340 ): 9029 I/Os completed (+1644) 00:13:08.915 QEMU NVMe Ctrl (12341 ): 9296 I/Os completed (+1696) 00:13:08.915 00:13:09.853 QEMU NVMe Ctrl (12340 ): 10575 I/Os completed (+1546) 00:13:09.853 QEMU NVMe Ctrl (12341 ): 10898 I/Os completed (+1602) 00:13:09.853 00:13:10.787 QEMU NVMe Ctrl (12340 ): 11926 I/Os completed (+1351) 00:13:10.787 QEMU NVMe Ctrl (12341 ): 12369 I/Os completed (+1471) 00:13:10.787 
00:13:11.720 QEMU NVMe Ctrl (12340 ): 13365 I/Os completed (+1439)
00:13:11.720 QEMU NVMe Ctrl (12341 ): 13899 I/Os completed (+1530)
00:13:11.720
00:13:12.655 QEMU NVMe Ctrl (12340 ): 14982 I/Os completed (+1617)
00:13:12.655 QEMU NVMe Ctrl (12341 ): 15658 I/Os completed (+1759)
00:13:12.655
00:13:14.027 QEMU NVMe Ctrl (12340 ): 16542 I/Os completed (+1560)
00:13:14.027 QEMU NVMe Ctrl (12341 ): 17318 I/Os completed (+1660)
00:13:14.027
00:13:14.593 QEMU NVMe Ctrl (12340 ): 17971 I/Os completed (+1429)
00:13:14.593 QEMU NVMe Ctrl (12341 ): 18914 I/Os completed (+1596)
00:13:14.593
00:13:14.850 11:38:13 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false
00:13:14.850 11:38:13 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:13:14.850 11:38:13 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:13:14.850 11:38:13 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:13:14.850 [2024-07-25 11:38:13.728842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state.
00:13:14.850 Controller removed: QEMU NVMe Ctrl (12340 )
00:13:14.850 [2024-07-25 11:38:13.733471] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:14.850 [2024-07-25 11:38:13.733772] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:14.850 [2024-07-25 11:38:13.734122] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:14.850 [2024-07-25 11:38:13.734373] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:14.850 unregister_dev: QEMU NVMe Ctrl (12340 )
00:13:14.850 [2024-07-25 11:38:13.740421] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:14.850 [2024-07-25 11:38:13.740519] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:14.850 [2024-07-25 11:38:13.740573] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:14.850 [2024-07-25 11:38:13.740621] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:14.850 11:38:13 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:13:14.850 11:38:13 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:13:14.850 [2024-07-25 11:38:13.778352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state.
00:13:14.850 Controller removed: QEMU NVMe Ctrl (12341 )
00:13:14.850 [2024-07-25 11:38:13.781099] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:14.850 [2024-07-25 11:38:13.781324] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:14.850 [2024-07-25 11:38:13.781533] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:14.850 [2024-07-25 11:38:13.781732] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:14.850 unregister_dev: QEMU NVMe Ctrl (12341 )
00:13:14.850 [2024-07-25 11:38:13.785496] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:14.850 [2024-07-25 11:38:13.785694] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:14.850 [2024-07-25 11:38:13.785795] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:14.851 [2024-07-25 11:38:13.785907] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:14.851 11:38:13 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false
00:13:14.851 11:38:13 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:13:14.851 11:38:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:13:14.851 11:38:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:13:14.851 11:38:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:13:15.109 11:38:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:13:15.109 11:38:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:13:15.109 11:38:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:13:15.109 11:38:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:13:15.109 11:38:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0
00:13:15.109 Attaching to 0000:00:10.0
00:13:15.109 Attached to 0000:00:10.0
00:13:15.109 11:38:14 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0
00:13:15.109 11:38:14 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:13:15.109 11:38:14 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12
00:13:15.109 Attaching to 0000:00:11.0
00:13:15.109 Attached to 0000:00:11.0
00:13:15.675 QEMU NVMe Ctrl (12340 ): 919 I/Os completed (+919)
00:13:15.675 QEMU NVMe Ctrl (12341 ): 842 I/Os completed (+842)
00:13:15.675
00:13:16.609 QEMU NVMe Ctrl (12340 ): 2099 I/Os completed (+1180)
00:13:16.609 QEMU NVMe Ctrl (12341 ): 2182 I/Os completed (+1340)
00:13:16.609
00:13:18.025 QEMU NVMe Ctrl (12340 ): 3564 I/Os completed (+1465)
00:13:18.025 QEMU NVMe Ctrl (12341 ): 3688 I/Os completed (+1506)
00:13:18.025
00:13:18.958 QEMU NVMe Ctrl (12340 ): 5125 I/Os completed (+1561)
00:13:18.958 QEMU NVMe Ctrl (12341 ): 5299 I/Os completed (+1611)
00:13:18.958
00:13:19.894 QEMU NVMe Ctrl (12340 ): 6389 I/Os completed (+1264)
00:13:19.894 QEMU NVMe Ctrl (12341 ): 6686 I/Os completed (+1387)
00:13:19.894
00:13:20.830 QEMU NVMe Ctrl (12340 ): 7799 I/Os completed (+1410)
00:13:20.830 QEMU NVMe Ctrl (12341 ): 8190 I/Os completed (+1504)
00:13:20.830
00:13:21.766 QEMU NVMe Ctrl (12340 ): 9397 I/Os completed (+1598)
00:13:21.766 QEMU NVMe Ctrl (12341 ): 9806 I/Os completed (+1616)
00:13:21.766
00:13:22.700 QEMU NVMe Ctrl (12340 ): 11088 I/Os completed (+1691)
00:13:22.700 QEMU NVMe Ctrl (12341 ): 11509 I/Os completed (+1703)
00:13:22.700
00:13:23.633 QEMU NVMe Ctrl (12340 ): 12708 I/Os completed (+1620)
00:13:23.633 QEMU NVMe Ctrl (12341 ): 13154 I/Os completed (+1645)
00:13:23.633
00:13:25.007 QEMU NVMe Ctrl (12340 ): 14396 I/Os completed (+1688)
00:13:25.007 QEMU NVMe Ctrl (12341 ): 14870 I/Os completed (+1716)
00:13:25.007
00:13:25.942 QEMU NVMe Ctrl (12340 ): 16027 I/Os completed (+1631)
00:13:25.942 QEMU NVMe Ctrl (12341 ): 16518 I/Os completed (+1648)
00:13:25.942
00:13:26.877 QEMU NVMe Ctrl (12340 ): 17643 I/Os completed (+1616)
00:13:26.877 QEMU NVMe Ctrl (12341 ): 18146 I/Os completed (+1628)
00:13:26.877
00:13:27.135 11:38:26 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false
00:13:27.135 11:38:26 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:13:27.135 11:38:26 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:13:27.135 11:38:26 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:13:27.135 [2024-07-25 11:38:26.088400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state.
00:13:27.135 Controller removed: QEMU NVMe Ctrl (12340 )
00:13:27.135 [2024-07-25 11:38:26.090646] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:27.135 [2024-07-25 11:38:26.090912] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:27.135 [2024-07-25 11:38:26.091011] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:27.135 [2024-07-25 11:38:26.091141] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:27.135 unregister_dev: QEMU NVMe Ctrl (12340 )
00:13:27.135 [2024-07-25 11:38:26.094734] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:27.135 [2024-07-25 11:38:26.094802] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:27.135 [2024-07-25 11:38:26.094832] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:27.135 [2024-07-25 11:38:26.094857] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:27.135 11:38:26 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:13:27.135 11:38:26 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:13:27.135 [2024-07-25 11:38:26.119853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state.
00:13:27.135 Controller removed: QEMU NVMe Ctrl (12341 )
00:13:27.135 [2024-07-25 11:38:26.122352] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:27.135 [2024-07-25 11:38:26.122442] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:27.135 [2024-07-25 11:38:26.122487] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:27.135 [2024-07-25 11:38:26.122522] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:27.135 unregister_dev: QEMU NVMe Ctrl (12341 )
00:13:27.135 [2024-07-25 11:38:26.126105] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:27.135 [2024-07-25 11:38:26.126188] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:27.135 [2024-07-25 11:38:26.126248] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:27.135 [2024-07-25 11:38:26.126282] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:27.135 11:38:26 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false
00:13:27.135 11:38:26 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:13:27.135 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor
00:13:27.135 EAL: Scan for (pci) bus failed.
00:13:27.394 11:38:26 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:13:27.394 11:38:26 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:13:27.394 11:38:26 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:13:27.394 11:38:26 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:13:27.394 11:38:26 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:13:27.394 11:38:26 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:13:27.394 11:38:26 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:13:27.394 11:38:26 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0
00:13:27.394 Attaching to 0000:00:10.0
00:13:27.394 Attached to 0000:00:10.0
00:13:27.394 11:38:26 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0
00:13:27.394 11:38:26 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:13:27.394 11:38:26 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12
00:13:27.394 Attaching to 0000:00:11.0
00:13:27.394 Attached to 0000:00:11.0
00:13:27.394 unregister_dev: QEMU NVMe Ctrl (12340 )
00:13:27.394 unregister_dev: QEMU NVMe Ctrl (12341 )
00:13:27.652 [2024-07-25 11:38:26.450600] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09
00:13:39.850 11:38:38 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false
00:13:39.850 11:38:38 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:13:39.850 11:38:38 sw_hotplug -- common/autotest_common.sh@717 -- # time=43.04
00:13:39.850 11:38:38 sw_hotplug -- common/autotest_common.sh@718 -- # echo 43.04
00:13:39.850 11:38:38 sw_hotplug -- common/autotest_common.sh@720 -- # return 0
00:13:39.850 11:38:38 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.04
00:13:39.850 11:38:38 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.04 2
00:13:39.850 remove_attach_helper took 43.04s to complete (handling 2 nvme drive(s)) 11:38:38 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6
00:13:46.406 11:38:44 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 72464
00:13:46.406 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (72464) - No such process
00:13:46.406 11:38:44 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 72464
00:13:46.406 11:38:44 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT
00:13:46.406 11:38:44 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug
00:13:46.406 11:38:44 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev
00:13:46.406 11:38:44 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=72998
00:13:46.406 11:38:44 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:13:46.406 11:38:44 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT
00:13:46.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:46.406 11:38:44 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 72998
00:13:46.406 11:38:44 sw_hotplug -- common/autotest_common.sh@831 -- # '[' -z 72998 ']'
00:13:46.406 11:38:44 sw_hotplug -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:46.406 11:38:44 sw_hotplug -- common/autotest_common.sh@836 -- # local max_retries=100
00:13:46.406 11:38:44 sw_hotplug -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:46.406 11:38:44 sw_hotplug -- common/autotest_common.sh@840 -- # xtrace_disable
00:13:46.406 11:38:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:13:46.406 [2024-07-25 11:38:44.604201] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:13:46.406 [2024-07-25 11:38:44.604389] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72998 ]
00:13:46.406 [2024-07-25 11:38:44.773594] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:46.406 [2024-07-25 11:38:45.057658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:13:46.972 11:38:45 sw_hotplug -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:13:46.972 11:38:45 sw_hotplug -- common/autotest_common.sh@864 -- # return 0
00:13:46.972 11:38:45 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e
00:13:46.972 11:38:45 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:46.972 11:38:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:13:46.972 11:38:45 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:46.972 11:38:45 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true
00:13:46.972 11:38:45 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0
00:13:46.972 11:38:45 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true
00:13:46.972 11:38:45 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0
00:13:46.972 11:38:45 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]]
00:13:46.972 11:38:45 sw_hotplug -- common/autotest_common.sh@709 -- # exec
00:13:46.972 11:38:45 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R
00:13:46.972 11:38:45 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true
00:13:46.972 11:38:45 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3
00:13:46.972 11:38:45 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6
00:13:46.972 11:38:45 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true
00:13:46.972 11:38:45 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs
00:13:46.972 11:38:45 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6
00:13:53.527 11:38:51 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:13:53.527 11:38:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:13:53.527 11:38:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:13:53.527 11:38:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:13:53.527 11:38:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:13:53.527 11:38:51 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true
00:13:53.527 11:38:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:13:53.527 11:38:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:13:53.527 11:38:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:13:53.527 11:38:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:13:53.527 11:38:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:13:53.527 11:38:51 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:53.527 11:38:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:13:53.527 11:38:51 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:53.527 [2024-07-25 11:38:52.006207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state.
00:13:53.527 [2024-07-25 11:38:52.009317] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:53.527 [2024-07-25 11:38:52.009381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000
00:13:53.527 [2024-07-25 11:38:52.009430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:53.527 [2024-07-25 11:38:52.009465] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:53.527 [2024-07-25 11:38:52.009488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000
00:13:53.527 [2024-07-25 11:38:52.009504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:53.527 [2024-07-25 11:38:52.009524] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:53.527 [2024-07-25 11:38:52.009540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000
00:13:53.527 [2024-07-25 11:38:52.009558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:53.527 [2024-07-25 11:38:52.009574] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:53.527 [2024-07-25 11:38:52.009595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000
00:13:53.527 [2024-07-25 11:38:52.009610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:53.527 11:38:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 ))
00:13:53.527 11:38:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5
00:13:53.527 [2024-07-25 11:38:52.406246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state.
00:13:53.527 [2024-07-25 11:38:52.409582] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:53.527 [2024-07-25 11:38:52.409646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000
00:13:53.527 [2024-07-25 11:38:52.409673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:53.527 [2024-07-25 11:38:52.409710] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:53.527 [2024-07-25 11:38:52.409727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000
00:13:53.527 [2024-07-25 11:38:52.409746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:53.527 [2024-07-25 11:38:52.409763] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:53.527 [2024-07-25 11:38:52.409781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000
00:13:53.527 [2024-07-25 11:38:52.409796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:53.527 [2024-07-25 11:38:52.409815] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:53.527 [2024-07-25 11:38:52.409831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000
00:13:53.527 [2024-07-25 11:38:52.409849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:53.527 11:38:52 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0
00:13:53.527 11:38:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:13:53.527 11:38:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:13:53.527 11:38:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:13:53.527 11:38:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:13:53.527 11:38:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:13:53.527 11:38:52 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:53.527 11:38:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:13:53.527 11:38:52 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:53.527 11:38:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 ))
00:13:53.527 11:38:52 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:13:53.816 11:38:52 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:13:53.816 11:38:52 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:13:53.816 11:38:52 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:13:53.816 11:38:52 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:13:53.816 11:38:52 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:13:53.816 11:38:52 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:13:53.816 11:38:52 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:13:53.816 11:38:52 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0
00:13:53.816 11:38:52 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0
00:13:53.816 11:38:52 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:13:53.816 11:38:52 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12
00:14:06.008 11:39:04 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true
00:14:06.008 11:39:04 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs))
00:14:06.008 11:39:04 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs
00:14:06.008 11:39:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:14:06.008 11:39:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:14:06.008 11:39:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:14:06.008 11:39:04 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:06.008 11:39:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:14:06.008 11:39:04 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:06.008 11:39:04 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]]
00:14:06.008 11:39:04 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:14:06.008 11:39:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:14:06.008 11:39:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:14:06.008 11:39:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:14:06.008 11:39:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:14:06.008 11:39:04 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true
00:14:06.008 11:39:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:14:06.008 11:39:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:14:06.008 11:39:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:14:06.008 11:39:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:14:06.008 11:39:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:14:06.008 11:39:04 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:06.008 11:39:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:14:06.008 [2024-07-25 11:39:05.006520] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state.
00:14:06.008 [2024-07-25 11:39:05.010131] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:06.008 [2024-07-25 11:39:05.010192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000
00:14:06.008 [2024-07-25 11:39:05.010223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:06.008 [2024-07-25 11:39:05.010260] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:06.008 [2024-07-25 11:39:05.010281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000
00:14:06.008 [2024-07-25 11:39:05.010297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:06.008 [2024-07-25 11:39:05.010316] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:06.008 [2024-07-25 11:39:05.010332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000
00:14:06.008 [2024-07-25 11:39:05.010349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:06.008 [2024-07-25 11:39:05.010365] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:06.008 [2024-07-25 11:39:05.010383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000
00:14:06.008 [2024-07-25 11:39:05.010398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:06.008 11:39:05 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:06.008 11:39:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 ))
00:14:06.008 11:39:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5
00:14:06.574 [2024-07-25 11:39:05.506501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state.
00:14:06.574 [2024-07-25 11:39:05.510154] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:06.574 [2024-07-25 11:39:05.510312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000
00:14:06.574 [2024-07-25 11:39:05.510339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:06.574 [2024-07-25 11:39:05.510380] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:06.574 [2024-07-25 11:39:05.510396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000
00:14:06.574 [2024-07-25 11:39:05.510414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:06.574 [2024-07-25 11:39:05.510431] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:06.574 [2024-07-25 11:39:05.510449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000
00:14:06.574 [2024-07-25 11:39:05.510463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:06.574 [2024-07-25 11:39:05.510482] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:06.574 [2024-07-25 11:39:05.510496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000
00:14:06.574 [2024-07-25 11:39:05.510513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:06.574 11:39:05 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0
00:14:06.574 11:39:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:14:06.574 11:39:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:14:06.574 11:39:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:14:06.574 11:39:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:14:06.574 11:39:05 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:06.574 11:39:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:14:06.574 11:39:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:14:06.844 11:39:05 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:06.844 11:39:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 ))
00:14:06.844 11:39:05 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:14:06.844 11:39:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:14:06.844 11:39:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:14:06.844 11:39:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:14:06.844 11:39:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:14:06.844 11:39:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:14:06.844 11:39:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:14:06.844 11:39:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:14:06.844 11:39:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0
00:14:07.103 11:39:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0
00:14:07.103 11:39:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:14:07.103 11:39:05 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12
00:14:19.304 11:39:17 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true
00:14:19.304 11:39:17 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs))
00:14:19.304 11:39:17 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs
00:14:19.304 11:39:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:14:19.304 11:39:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:14:19.304 11:39:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:14:19.304 11:39:17 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:19.304 11:39:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:14:19.304 11:39:17 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:19.304 11:39:17 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]]
00:14:19.304 11:39:17 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:14:19.304 11:39:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:14:19.304 11:39:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:14:19.304 11:39:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:14:19.304 11:39:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:14:19.304 11:39:17 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true
00:14:19.304 11:39:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:14:19.304 11:39:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:14:19.304 11:39:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:14:19.304 11:39:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:14:19.304 11:39:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:14:19.304 11:39:17 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:19.304 11:39:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:14:19.304 [2024-07-25 11:39:18.006793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state.
00:14:19.304 [2024-07-25 11:39:18.010312] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:19.304 [2024-07-25 11:39:18.010369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000
00:14:19.304 [2024-07-25 11:39:18.010405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:19.304 [2024-07-25 11:39:18.010439] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:19.304 [2024-07-25 11:39:18.010463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000
00:14:19.304 [2024-07-25 11:39:18.010480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:19.304 [2024-07-25 11:39:18.010512] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:19.304 [2024-07-25 11:39:18.010530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000
00:14:19.304 [2024-07-25 11:39:18.010552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:19.304 [2024-07-25 11:39:18.010570] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:19.304 [2024-07-25 11:39:18.010592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000
00:14:19.304 [2024-07-25 11:39:18.010608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:19.304 11:39:18 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:19.304 11:39:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 ))
00:14:19.304 11:39:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5
00:14:19.561 [2024-07-25 11:39:18.406781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state.
00:14:19.561 [2024-07-25 11:39:18.409979] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:19.561 [2024-07-25 11:39:18.410044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000
00:14:19.561 [2024-07-25 11:39:18.410071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:19.561 [2024-07-25 11:39:18.410105] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:19.561 [2024-07-25 11:39:18.410123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000
00:14:19.561 [2024-07-25 11:39:18.410143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:19.561 [2024-07-25 11:39:18.410162] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:19.561 [2024-07-25 11:39:18.410183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000
00:14:19.561 [2024-07-25 11:39:18.410199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:19.561 [2024-07-25 11:39:18.410222] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:19.562 [2024-07-25 11:39:18.410238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000
00:14:19.562 [2024-07-25 11:39:18.410256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:19.562 11:39:18 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0
00:14:19.562 11:39:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:14:19.562 11:39:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:14:19.562 11:39:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:14:19.562 11:39:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:14:19.562 11:39:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:14:19.562 11:39:18 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:19.562 11:39:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:14:19.562 11:39:18 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:19.819 11:39:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 ))
00:14:19.819 11:39:18 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:14:19.819 11:39:18 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:14:19.819 11:39:18 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:14:19.819 11:39:18 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:14:19.819 11:39:18 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:14:19.819 11:39:18 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:14:19.819 11:39:18 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:14:19.819 11:39:18 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:14:19.819 11:39:18 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0
00:14:20.077 11:39:18 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0
00:14:20.077 11:39:18 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:14:20.077 11:39:18 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12
00:14:32.297 11:39:30 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true
00:14:32.297 11:39:30 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs))
00:14:32.297 11:39:30 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs
00:14:32.297 11:39:30 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:14:32.297 11:39:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:14:32.297 11:39:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:14:32.297 11:39:30 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:32.297 11:39:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:14:32.297 11:39:30 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:32.297 11:39:30 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]]
00:14:32.297 11:39:30 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:14:32.297 11:39:30 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.05
00:14:32.297 11:39:30 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.05
00:14:32.297 11:39:30 sw_hotplug -- common/autotest_common.sh@720 -- # return 0
00:14:32.297 11:39:30 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.05
00:14:32.297 11:39:30 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.05 2
00:14:32.297 remove_attach_helper took 45.05s to complete (handling 2 nvme drive(s)) 11:39:30 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d
00:14:32.297 11:39:30 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:32.297 11:39:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:14:32.297 11:39:30 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:32.297 11:39:30 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e
00:14:32.297 11:39:30 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:32.297 11:39:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:14:32.297 11:39:30 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:32.297 11:39:30 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true
00:14:32.297 11:39:30 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0
00:14:32.297 11:39:30 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true
00:14:32.297 11:39:30 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0
00:14:32.297 11:39:30 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]]
00:14:32.297 11:39:30 sw_hotplug -- common/autotest_common.sh@709 -- # exec
00:14:32.297 11:39:30 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R
00:14:32.297 11:39:30 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true
00:14:32.297 11:39:30 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3
00:14:32.297 11:39:30 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6
00:14:32.297 11:39:30 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true
00:14:32.297 11:39:30 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs
00:14:32.297 11:39:30 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6
00:14:38.892 11:39:36 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:14:38.892 11:39:36 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:14:38.892 11:39:36 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:14:38.892 11:39:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:14:38.892 11:39:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:14:38.892 11:39:37 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true
00:14:38.892 11:39:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:14:38.892 11:39:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:14:38.892 11:39:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:14:38.892 11:39:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:14:38.892 11:39:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:14:38.893 11:39:37 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:38.893 11:39:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:14:38.893 [2024-07-25 11:39:37.082649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state.
00:14:38.893 [2024-07-25 11:39:37.084892] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:38.893 [2024-07-25 11:39:37.084958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000
00:14:38.893 [2024-07-25 11:39:37.084993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:38.893 [2024-07-25 11:39:37.085027] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:38.893 [2024-07-25 11:39:37.085048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000
00:14:38.893 [2024-07-25 11:39:37.085064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:38.893 [2024-07-25 11:39:37.085084] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:38.893 [2024-07-25 11:39:37.085100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000
00:14:38.893 [2024-07-25 11:39:37.085121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:38.893 [2024-07-25 11:39:37.085138] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:38.893 [2024-07-25 11:39:37.085156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000
00:14:38.893 [2024-07-25 11:39:37.085171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:38.893 11:39:37 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:38.893 11:39:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 ))
00:14:38.893 11:39:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5
00:14:38.893 [2024-07-25 11:39:37.482691] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state.
00:14:38.893 [2024-07-25 11:39:37.485188] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:38.893 [2024-07-25 11:39:37.485378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000
00:14:38.893 [2024-07-25 11:39:37.485413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:38.893 [2024-07-25 11:39:37.485450] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:38.893 [2024-07-25 11:39:37.485469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000
00:14:38.893 [2024-07-25 11:39:37.485488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:38.893 [2024-07-25 11:39:37.485506] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:38.893 [2024-07-25 11:39:37.485525] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000
00:14:38.893 [2024-07-25 11:39:37.485540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:38.893 [2024-07-25 11:39:37.485559] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:38.893 [2024-07-25 11:39:37.485575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000
00:14:38.893 [2024-07-25 11:39:37.485593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:38.893 11:39:37 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0
00:14:38.893 11:39:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:14:38.893 11:39:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:14:38.893 11:39:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:14:38.893 11:39:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:14:38.893 11:39:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:14:38.893 11:39:37 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:38.893 11:39:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:14:38.893 11:39:37 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:38.893 11:39:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 ))
00:14:38.893 11:39:37 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:14:38.893 11:39:37 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:14:38.893 11:39:37 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:14:38.893 11:39:37 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:14:38.893 11:39:37 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:14:38.893 11:39:37 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:14:38.893 11:39:37 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:14:38.893 11:39:37 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:14:38.893 11:39:37 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0
00:14:39.150 11:39:37 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0
00:14:39.151 11:39:37 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:14:39.151 11:39:37 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12
00:14:51.435 11:39:49 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true
00:14:51.435 11:39:50 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs))
00:14:51.435 11:39:50 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs
00:14:51.435 11:39:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:14:51.435 11:39:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:14:51.435 11:39:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:14:51.435 11:39:50 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:51.435 11:39:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:14:51.435 11:39:50 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:51.435 11:39:50 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]]
00:14:51.435 11:39:50 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:14:51.435 11:39:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:14:51.435 11:39:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:14:51.435 11:39:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:14:51.435 11:39:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:14:51.435 [2024-07-25 11:39:50.082901] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state.
00:14:51.435 [2024-07-25 11:39:50.085587] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:51.435 [2024-07-25 11:39:50.085693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000
00:14:51.435 [2024-07-25 11:39:50.085795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:51.435 [2024-07-25 11:39:50.085880] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:51.435 [2024-07-25 11:39:50.085958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000
00:14:51.435 [2024-07-25 11:39:50.086030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:51.435 [2024-07-25 11:39:50.086129] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:51.435 [2024-07-25 11:39:50.086180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000
00:14:51.435 [2024-07-25 11:39:50.086246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:51.435 [2024-07-25 11:39:50.086313] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:51.435 [2024-07-25 11:39:50.086364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000
00:14:51.435 [2024-07-25 11:39:50.086430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:51.435 11:39:50 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true
00:14:51.435 11:39:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:14:51.435 11:39:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:14:51.435 11:39:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:14:51.435 11:39:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:14:51.435 11:39:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:14:51.435 11:39:50 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:51.435 11:39:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:14:51.435 11:39:50 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:51.435 11:39:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 ))
00:14:51.435 11:39:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5
00:14:51.435 [2024-07-25 11:39:50.482901] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state.
00:14:51.693 [2024-07-25 11:39:50.485268] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:51.693 [2024-07-25 11:39:50.485390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000
00:14:51.693 [2024-07-25 11:39:50.485414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:51.693 [2024-07-25 11:39:50.485449] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:51.693 [2024-07-25 11:39:50.485465] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000
00:14:51.693 [2024-07-25 11:39:50.485488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:51.693 [2024-07-25 11:39:50.485503] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:51.693 [2024-07-25 11:39:50.485520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000
00:14:51.693 [2024-07-25 11:39:50.485534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:51.693 [2024-07-25 11:39:50.485552] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:51.693 [2024-07-25 11:39:50.485566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000
00:14:51.693 [2024-07-25 11:39:50.485583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:51.693 11:39:50 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0
00:14:51.693 11:39:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:14:51.693 11:39:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:14:51.693 11:39:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:14:51.693 11:39:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:14:51.693 11:39:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:14:51.693 11:39:50 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:51.693 11:39:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:14:51.693 11:39:50 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:51.693 11:39:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 ))
00:14:51.693 11:39:50 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:14:51.950 11:39:50 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:14:51.950 11:39:50 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:14:51.950 11:39:50 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:14:51.950 11:39:50 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:14:51.950 11:39:50 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:14:51.950 11:39:50 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:14:51.950 11:39:50 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:14:51.950 11:39:50 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0
00:14:51.950 11:39:50 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0
00:14:52.242 11:39:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:14:52.242 11:39:51 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12
00:15:04.536 11:40:03 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true
00:15:04.536 11:40:03 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs))
00:15:04.536 11:40:03 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs
00:15:04.536 11:40:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:15:04.536 11:40:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:15:04.537 11:40:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:15:04.537 11:40:03 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:04.537 11:40:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:15:04.537 11:40:03 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:04.537 11:40:03 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]]
00:15:04.537 11:40:03 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:15:04.537 11:40:03 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:15:04.537 11:40:03 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:15:04.537 [2024-07-25 11:40:03.083153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state.
00:15:04.537 [2024-07-25 11:40:03.085864] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:04.537 [2024-07-25 11:40:03.086066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:04.537 [2024-07-25 11:40:03.086333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.537 [2024-07-25 11:40:03.086521] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:04.537 [2024-07-25 11:40:03.086680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:04.537 [2024-07-25 11:40:03.086829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.537 [2024-07-25 11:40:03.086935] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:04.537 [2024-07-25 11:40:03.087063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:04.537 [2024-07-25 11:40:03.087226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.537 [2024-07-25 11:40:03.087395] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:04.537 [2024-07-25 11:40:03.087544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:04.537 [2024-07-25 11:40:03.087695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.537 11:40:03 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:04.537 11:40:03 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:04.537 11:40:03 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:04.537 11:40:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:04.537 11:40:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:04.537 11:40:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:04.537 11:40:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:04.537 11:40:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:04.537 11:40:03 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.537 11:40:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:04.537 11:40:03 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.537 11:40:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:15:04.537 11:40:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:04.537 [2024-07-25 11:40:03.483165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:15:04.537 [2024-07-25 11:40:03.485844] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:04.537 [2024-07-25 11:40:03.486086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:04.537 [2024-07-25 11:40:03.486263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.537 [2024-07-25 11:40:03.486505] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:04.537 [2024-07-25 11:40:03.486650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:04.537 [2024-07-25 11:40:03.486815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.537 [2024-07-25 11:40:03.487076] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:04.537 [2024-07-25 11:40:03.487284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:04.537 [2024-07-25 11:40:03.487370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.537 [2024-07-25 11:40:03.487549] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:04.537 [2024-07-25 11:40:03.487604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:04.537 [2024-07-25 11:40:03.487761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.796 11:40:03 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:15:04.796 11:40:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:04.796 11:40:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:04.796 11:40:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:04.796 11:40:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:04.796 11:40:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:04.796 11:40:03 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.796 11:40:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:04.796 11:40:03 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.796 11:40:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:04.796 11:40:03 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:04.796 11:40:03 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:04.796 11:40:03 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:04.796 11:40:03 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:05.054 11:40:03 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:05.054 11:40:03 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:05.054 11:40:03 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:05.054 11:40:03 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:05.054 11:40:03 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:15:05.054 11:40:04 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0
00:15:05.054 11:40:04 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:15:05.054 11:40:04 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12
00:15:17.386 11:40:16 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true
00:15:17.386 11:40:16 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs))
00:15:17.386 11:40:16 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs
00:15:17.386 11:40:16 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:15:17.386 11:40:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:15:17.386 11:40:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:15:17.386 11:40:16 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:17.386 11:40:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:15:17.386 11:40:16 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:17.386 11:40:16 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]]
00:15:17.386 11:40:16 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:15:17.386 11:40:16 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.12
00:15:17.386 11:40:16 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.12
00:15:17.386 11:40:16 sw_hotplug -- common/autotest_common.sh@720 -- # return 0
00:15:17.386 11:40:16 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.12
00:15:17.386 11:40:16 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.12 2
00:15:17.386 remove_attach_helper took 45.12s to complete (handling 2 nvme drive(s)) 11:40:16 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT
00:15:17.386 11:40:16 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 72998
00:15:17.386 11:40:16 sw_hotplug -- common/autotest_common.sh@950 -- # '[' -z 72998 ']'
00:15:17.386 11:40:16 sw_hotplug -- common/autotest_common.sh@954 -- # kill -0 72998
00:15:17.386 11:40:16 sw_hotplug -- common/autotest_common.sh@955 -- # uname
00:15:17.386 11:40:16 sw_hotplug -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:15:17.386 11:40:16 sw_hotplug -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72998
00:15:17.386 killing process with pid 72998 11:40:16 sw_hotplug -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:15:17.386 11:40:16 sw_hotplug -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:15:17.386 11:40:16 sw_hotplug -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72998'
00:15:17.386 11:40:16 sw_hotplug -- common/autotest_common.sh@969 -- # kill 72998
00:15:17.386 11:40:16 sw_hotplug -- common/autotest_common.sh@974 -- # wait 72998
00:15:19.916 11:40:18 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:15:19.916 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:15:20.481 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:15:20.481 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:15:20.481 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:15:20.481 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:15:20.481
00:15:20.481 real 2m31.928s
00:15:20.481 user 1m52.114s
00:15:20.481 sys 0m19.626s
00:15:20.481 ************************************
00:15:20.481 END TEST sw_hotplug 00:15:20.481 ************************************ 00:15:20.481 11:40:19 sw_hotplug -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:20.481 11:40:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:20.739 11:40:19 -- spdk/autotest.sh@251 -- # [[ 1 -eq 1 ]] 00:15:20.739 11:40:19 -- spdk/autotest.sh@252 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:15:20.739 11:40:19 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:20.739 11:40:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:20.739 11:40:19 -- common/autotest_common.sh@10 -- # set +x 00:15:20.739 ************************************ 00:15:20.739 START TEST nvme_xnvme 00:15:20.739 ************************************ 00:15:20.739 11:40:19 nvme_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:15:20.739 * Looking for test storage... 00:15:20.739 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:15:20.739 11:40:19 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:20.739 11:40:19 nvme_xnvme -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:20.739 11:40:19 nvme_xnvme -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:20.739 11:40:19 nvme_xnvme -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:20.739 11:40:19 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.739 11:40:19 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.739 11:40:19 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.739 11:40:19 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:15:20.739 11:40:19 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.739 11:40:19 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:15:20.739 11:40:19 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 
00:15:20.739 11:40:19 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:20.739 11:40:19 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:20.739 ************************************ 00:15:20.739 START TEST xnvme_to_malloc_dd_copy 00:15:20.739 ************************************ 00:15:20.739 11:40:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1125 -- # malloc_to_xnvme_copy 00:15:20.739 11:40:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:15:20.739 11:40:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:15:20.739 11:40:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:15:20.739 11:40:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:15:20.739 11:40:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:15:20.739 11:40:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:15:20.739 11:40:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:15:20.739 11:40:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:15:20.739 11:40:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:15:20.739 11:40:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:15:20.739 11:40:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:15:20.739 11:40:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:15:20.739 11:40:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:15:20.739 11:40:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:15:20.739 11:40:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:15:20.739 11:40:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:15:20.739 11:40:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:15:20.739 11:40:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:15:20.739 11:40:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:15:20.739 11:40:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:15:20.739 11:40:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:15:20.739 11:40:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:15:20.739 11:40:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:15:20.739 11:40:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:15:20.739 { 00:15:20.739 "subsystems": [ 00:15:20.739 { 00:15:20.739 "subsystem": "bdev", 00:15:20.739 "config": [ 00:15:20.739 { 00:15:20.739 "params": { 00:15:20.739 "block_size": 512, 00:15:20.739 "num_blocks": 2097152, 00:15:20.739 "name": "malloc0" 00:15:20.739 }, 00:15:20.739 "method": "bdev_malloc_create" 00:15:20.739 }, 00:15:20.739 { 00:15:20.739 "params": { 00:15:20.739 
"io_mechanism": "libaio", 00:15:20.739 "filename": "/dev/nullb0", 00:15:20.739 "name": "null0" 00:15:20.739 }, 00:15:20.739 "method": "bdev_xnvme_create" 00:15:20.739 }, 00:15:20.739 { 00:15:20.739 "method": "bdev_wait_for_examine" 00:15:20.739 } 00:15:20.739 ] 00:15:20.739 } 00:15:20.739 ] 00:15:20.739 } 00:15:20.739 [2024-07-25 11:40:19.788914] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:15:20.739 [2024-07-25 11:40:19.789297] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74342 ] 00:15:20.997 [2024-07-25 11:40:19.971414] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:21.255 [2024-07-25 11:40:20.247782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.386  Copying: 156/1024 [MB] (156 MBps) Copying: 312/1024 [MB] (155 MBps) Copying: 472/1024 [MB] (159 MBps) Copying: 630/1024 [MB] (158 MBps) Copying: 790/1024 [MB] (160 MBps) Copying: 952/1024 [MB] (161 MBps) Copying: 1024/1024 [MB] (average 158 MBps) 00:15:33.386 00:15:33.386 11:40:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:15:33.386 11:40:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:15:33.386 11:40:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:15:33.386 11:40:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:15:33.386 { 00:15:33.386 "subsystems": [ 00:15:33.386 { 00:15:33.386 "subsystem": "bdev", 00:15:33.386 "config": [ 00:15:33.386 { 00:15:33.386 "params": { 00:15:33.386 "block_size": 512, 00:15:33.386 "num_blocks": 2097152, 00:15:33.386 "name": "malloc0" 00:15:33.387 }, 00:15:33.387 "method": "bdev_malloc_create" 00:15:33.387 }, 00:15:33.387 { 00:15:33.387 "params": { 00:15:33.387 "io_mechanism": "libaio", 00:15:33.387 "filename": "/dev/nullb0", 00:15:33.387 "name": "null0" 00:15:33.387 }, 00:15:33.387 "method": "bdev_xnvme_create" 00:15:33.387 }, 00:15:33.387 { 00:15:33.387 "method": "bdev_wait_for_examine" 00:15:33.387 } 00:15:33.387 ] 00:15:33.387 } 00:15:33.387 ] 00:15:33.387 } 00:15:33.387 [2024-07-25 11:40:32.104367] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:15:33.387 [2024-07-25 11:40:32.104571] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74479 ] 00:15:33.387 [2024-07-25 11:40:32.283583] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.644 [2024-07-25 11:40:32.522124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:45.325  Copying: 165/1024 [MB] (165 MBps) Copying: 335/1024 [MB] (169 MBps) Copying: 498/1024 [MB] (162 MBps) Copying: 662/1024 [MB] (163 MBps) Copying: 824/1024 [MB] (162 MBps) Copying: 986/1024 [MB] (161 MBps) Copying: 1024/1024 [MB] (average 164 MBps) 00:15:45.325 00:15:45.325 11:40:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:15:45.325 11:40:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:15:45.325 11:40:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:15:45.325 11:40:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:15:45.325 11:40:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:15:45.325 11:40:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:15:45.325 { 00:15:45.325 "subsystems": [ 00:15:45.325 { 00:15:45.325 "subsystem": "bdev", 00:15:45.325 "config": [ 00:15:45.325 { 00:15:45.325 "params": { 00:15:45.325 "block_size": 512, 00:15:45.325 "num_blocks": 2097152, 00:15:45.325 "name": "malloc0" 00:15:45.325 }, 00:15:45.325 "method": "bdev_malloc_create" 00:15:45.325 }, 00:15:45.325 { 00:15:45.325 "params": { 00:15:45.325 "io_mechanism": "io_uring", 00:15:45.325 "filename": "/dev/nullb0", 00:15:45.326 "name": "null0" 00:15:45.326 }, 00:15:45.326 "method": "bdev_xnvme_create" 00:15:45.326 }, 00:15:45.326 { 00:15:45.326 "method": "bdev_wait_for_examine" 00:15:45.326 } 00:15:45.326 ] 00:15:45.326 } 00:15:45.326 ] 00:15:45.326 } 00:15:45.326 [2024-07-25 11:40:44.147398] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:15:45.326 [2024-07-25 11:40:44.147614] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74611 ] 00:15:45.326 [2024-07-25 11:40:44.329202] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:45.583 [2024-07-25 11:40:44.621644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:57.440  Copying: 168/1024 [MB] (168 MBps) Copying: 337/1024 [MB] (168 MBps) Copying: 504/1024 [MB] (167 MBps) Copying: 685/1024 [MB] (180 MBps) Copying: 856/1024 [MB] (171 MBps) Copying: 1024/1024 [MB] (average 171 MBps) 00:15:57.440 00:15:57.440 11:40:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:15:57.440 11:40:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:15:57.440 11:40:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:15:57.440 11:40:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:15:57.440 { 00:15:57.440 "subsystems": [ 00:15:57.440 { 00:15:57.440 "subsystem": "bdev", 00:15:57.440 "config": [ 00:15:57.440 { 00:15:57.440 "params": { 00:15:57.440 "block_size": 512, 00:15:57.440 "num_blocks": 2097152, 00:15:57.440 "name": "malloc0" 00:15:57.440 }, 00:15:57.440 "method": "bdev_malloc_create" 00:15:57.440 }, 00:15:57.440 { 00:15:57.440 "params": { 00:15:57.440 "io_mechanism": "io_uring", 00:15:57.441 "filename": "/dev/nullb0", 00:15:57.441 "name": "null0" 00:15:57.441 }, 00:15:57.441 "method": "bdev_xnvme_create" 00:15:57.441 }, 00:15:57.441 { 00:15:57.441 "method": "bdev_wait_for_examine" 00:15:57.441 } 00:15:57.441 ] 00:15:57.441 } 00:15:57.441 ] 00:15:57.441 } 00:15:57.441 [2024-07-25 11:40:55.963882] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:15:57.441 [2024-07-25 11:40:55.964096] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74738 ]
00:15:57.441 [2024-07-25 11:40:56.144750] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:57.441 [2024-07-25 11:40:56.400466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:16:08.649  Copying: 186/1024 [MB] (186 MBps) Copying: 374/1024 [MB] (187 MBps) Copying: 560/1024 [MB] (185 MBps) Copying: 741/1024 [MB] (181 MBps) Copying: 918/1024 [MB] (176 MBps) Copying: 1024/1024 [MB] (average 182 MBps)
00:16:08.649
00:16:08.649 11:41:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk
00:16:08.649 11:41:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk
00:16:08.649 ************************************
00:16:08.649 END TEST xnvme_to_malloc_dd_copy
00:16:08.649 ************************************
00:16:08.649
00:16:08.649 real 0m47.625s
00:16:08.649 user 0m41.498s
00:16:08.649 sys 0m5.539s
00:16:08.649 11:41:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1126 -- # xtrace_disable
00:16:08.649 11:41:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x
00:16:08.649 11:41:07 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf
00:16:08.649 11:41:07 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:16:08.649 11:41:07 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:16:08.649 11:41:07 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:16:08.649 ************************************
00:16:08.649 START TEST xnvme_bdevperf
00:16:08.649 ************************************
00:16:08.649 11:41:07 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1125 -- # xnvme_bdevperf
00:16:08.649 11:41:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1
00:16:08.649 11:41:07 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]]
00:16:08.649 11:41:07 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1
00:16:08.649 11:41:07 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return
00:16:08.649 11:41:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=()
00:16:08.649 11:41:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io
00:16:08.649 11:41:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io
00:16:08.649 11:41:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio)
00:16:08.649 11:41:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring)
00:16:08.649 11:41:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0
00:16:08.649 11:41:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=()
00:16:08.649 11:41:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0
00:16:08.649 11:41:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0
00:16:08.649 11:41:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0
00:16:08.649 11:41:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}"
00:16:08.649 11:41:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio
00:16:08.649 11:41:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf
00:16:08.649 11:41:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096
00:16:08.649 11:41:07 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:16:08.649 11:41:07 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:16:08.649 {
00:16:08.649 "subsystems": [
00:16:08.649 {
00:16:08.649 "subsystem": "bdev",
00:16:08.649 "config": [
00:16:08.649 {
00:16:08.649 "params": {
00:16:08.649 "io_mechanism": "libaio",
00:16:08.649 "filename": "/dev/nullb0",
00:16:08.649 "name": "null0"
00:16:08.649 },
00:16:08.649 "method": "bdev_xnvme_create"
00:16:08.649 },
00:16:08.649 {
00:16:08.649 "method": "bdev_wait_for_examine"
00:16:08.649 }
00:16:08.649 ]
00:16:08.649 }
00:16:08.649 ]
00:16:08.649 }
00:16:08.649 [2024-07-25 11:41:07.458747] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:16:08.649 [2024-07-25 11:41:07.458987] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74891 ]
00:16:08.908 [2024-07-25 11:41:07.635447] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:08.908 [2024-07-25 11:41:07.873879] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:16:09.475 Running I/O for 5 seconds...
00:16:14.740
00:16:14.740 Latency(us)
00:16:14.740 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:14.740 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096)
00:16:14.740 null0 : 5.00 120142.28 469.31 0.00 0.00 529.41 178.73 3172.54
00:16:14.740 ===================================================================================================================
00:16:14.740 Total : 120142.28 469.31 0.00 0.00 529.41 178.73 3172.54
00:16:15.674 11:41:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}"
00:16:15.674 11:41:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring
00:16:15.674 11:41:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096
00:16:15.674 11:41:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf
00:16:15.674 11:41:14 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:16:15.674 11:41:14 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:16:15.674 {
00:16:15.674 "subsystems": [
00:16:15.674 {
00:16:15.674 "subsystem": "bdev",
00:16:15.674 "config": [
00:16:15.674 {
00:16:15.674 "params": {
00:16:15.674 "io_mechanism": "io_uring",
00:16:15.674 "filename": "/dev/nullb0",
00:16:15.674 "name": "null0"
00:16:15.674 },
00:16:15.674 "method": "bdev_xnvme_create"
00:16:15.674 },
00:16:15.674 {
00:16:15.674 "method": "bdev_wait_for_examine"
00:16:15.674 }
00:16:15.674 ]
00:16:15.674 }
00:16:15.674 ]
00:16:15.674 }
00:16:15.674 [2024-07-25 11:41:14.591592] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:16:15.674 [2024-07-25 11:41:14.591798] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74971 ]
00:16:15.933 [2024-07-25 11:41:14.771125] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:16.205 [2024-07-25 11:41:15.019398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:16:16.462 Running I/O for 5 seconds...
00:16:21.746
00:16:21.746 Latency(us)
00:16:21.746 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:21.746 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096)
00:16:21.746 null0 : 5.00 157549.10 615.43 0.00 0.00 403.01 273.69 703.77
00:16:21.746 ===================================================================================================================
00:16:21.746 Total : 157549.10 615.43 0.00 0.00 403.01 273.69 703.77
00:16:22.678 11:41:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk
00:16:22.678 11:41:21 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk
00:16:22.678
00:16:22.678 real 0m14.318s
00:16:22.678 user 0m11.182s
00:16:22.678 sys 0m2.917s
00:16:22.678 11:41:21 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:16:22.678 ************************************
00:16:22.678 END TEST xnvme_bdevperf
00:16:22.678 ************************************
00:16:22.678 11:41:21 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:16:22.679 ************************************
00:16:22.679 END TEST nvme_xnvme
00:16:22.679 ************************************
00:16:22.679
00:16:22.679 real 1m2.136s
00:16:22.679 user 0m52.750s
00:16:22.679 sys 0m8.573s
00:16:22.679 11:41:21 nvme_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable
00:16:22.679 11:41:21 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:16:22.679 11:41:21 -- spdk/autotest.sh@253 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme
00:16:22.679 11:41:21 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:16:22.679 11:41:21 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:16:22.679 11:41:21 -- common/autotest_common.sh@10 -- # set +x
00:16:23.001 ************************************
00:16:23.001 START TEST blockdev_xnvme
00:16:23.001 ************************************
00:16:23.001 11:41:21 blockdev_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme
00:16:23.001 * Looking for test storage...
00:16:23.001 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev
00:16:23.001 11:41:21 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:16:23.001 11:41:21 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e
00:16:23.001 11:41:21 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd
00:16:23.001 11:41:21 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:16:23.001 11:41:21 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json
00:16:23.001 11:41:21 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json
00:16:23.001 11:41:21 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30
00:16:23.001 11:41:21 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30
00:16:23.001 11:41:21 blockdev_xnvme -- bdev/blockdev.sh@20 -- # :
00:16:23.001 11:41:21 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0
00:16:23.001 11:41:21 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1
00:16:23.001 11:41:21 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5
00:16:23.001 11:41:21 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s
00:16:23.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 11:41:21 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']'
00:16:23.001 11:41:21 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0
00:16:23.001 11:41:21 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme
00:16:23.001 11:41:21 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device=
00:16:23.001 11:41:21 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek=
00:16:23.001 11:41:21 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx=
00:16:23.001 11:41:21 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc=
00:16:23.001 11:41:21 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']'
00:16:23.001 11:41:21 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]]
00:16:23.001 11:41:21 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]]
00:16:23.001 11:41:21 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt
00:16:23.001 11:41:21 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=75115
00:16:23.001 11:41:21 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
00:16:23.001 11:41:21 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 75115
00:16:23.001 11:41:21 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' ''
00:16:23.001 11:41:21 blockdev_xnvme -- common/autotest_common.sh@831 -- # '[' -z 75115 ']'
00:16:23.001 11:41:21 blockdev_xnvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:23.001 11:41:21 blockdev_xnvme -- common/autotest_common.sh@836 -- # local max_retries=100
00:16:23.001 11:41:21 blockdev_xnvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:23.001 11:41:21 blockdev_xnvme -- common/autotest_common.sh@840 -- # xtrace_disable
00:16:23.001 11:41:21 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:16:23.001 [2024-07-25 11:41:21.952814] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:16:23.001 [2024-07-25 11:41:21.953149] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75115 ] 00:16:23.259 [2024-07-25 11:41:22.136852] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:23.517 [2024-07-25 11:41:22.436934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.451 11:41:23 blockdev_xnvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:24.451 11:41:23 blockdev_xnvme -- common/autotest_common.sh@864 -- # return 0 00:16:24.451 11:41:23 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:16:24.451 11:41:23 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:16:24.451 11:41:23 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:16:24.451 11:41:23 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:16:24.451 11:41:23 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:24.708 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:24.966 Waiting for block devices as requested 00:16:24.966 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:24.966 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:24.966 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:16:25.224 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:16:30.487 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:16:30.487 11:41:29 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:16:30.487 11:41:29 blockdev_xnvme -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:16:30.487 11:41:29 blockdev_xnvme -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:16:30.487 11:41:29 blockdev_xnvme -- common/autotest_common.sh@1670 -- # local nvme bdf 00:16:30.487 11:41:29 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:16:30.487 11:41:29 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:16:30.487 11:41:29 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:16:30.487 11:41:29 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:30.487 11:41:29 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:30.487 11:41:29 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:16:30.487 11:41:29 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:16:30.487 11:41:29 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:16:30.487 11:41:29 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:16:30.487 11:41:29 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:30.487 11:41:29 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:16:30.487 11:41:29 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:16:30.487 11:41:29 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:16:30.487 11:41:29 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:16:30.487 11:41:29 blockdev_xnvme -- 
common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:30.487 11:41:29 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:16:30.487 11:41:29 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:16:30.487 11:41:29 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:16:30.487 11:41:29 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:16:30.487 11:41:29 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:30.487 11:41:29 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:16:30.487 11:41:29 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:16:30.487 11:41:29 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:16:30.487 11:41:29 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:16:30.487 11:41:29 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:30.487 11:41:29 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:16:30.487 11:41:29 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:16:30.487 11:41:29 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:16:30.487 11:41:29 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:16:30.487 11:41:29 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:30.487 11:41:29 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:16:30.487 11:41:29 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:16:30.487 11:41:29 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:16:30.487 11:41:29 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:16:30.487 11:41:29 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:30.487 11:41:29 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:30.487 11:41:29 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:16:30.487 11:41:29 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:30.487 11:41:29 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:16:30.487 11:41:29 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:30.487 11:41:29 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:16:30.487 11:41:29 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:30.487 11:41:29 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:16:30.487 11:41:29 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:30.487 11:41:29 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:16:30.487 11:41:29 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:30.487 11:41:29 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:16:30.487 11:41:29 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:30.487 11:41:29 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:16:30.487 11:41:29 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:30.487 11:41:29 blockdev_xnvme -- 
bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:16:30.487 11:41:29 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:30.487 11:41:29 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:16:30.487 11:41:29 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:30.487 11:41:29 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:16:30.487 11:41:29 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:30.487 11:41:29 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:16:30.487 11:41:29 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:30.487 11:41:29 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:16:30.487 11:41:29 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:16:30.487 11:41:29 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:16:30.487 11:41:29 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.487 11:41:29 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:30.487 11:41:29 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:16:30.487 nvme0n1 00:16:30.487 nvme1n1 00:16:30.487 nvme2n1 00:16:30.487 nvme2n2 00:16:30.487 nvme2n3 00:16:30.487 nvme3n1 00:16:30.487 11:41:29 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.487 11:41:29 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:16:30.487 11:41:29 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.487 11:41:29 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:30.487 11:41:29 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.487 11:41:29 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:16:30.487 11:41:29 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:16:30.488 11:41:29 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.488 11:41:29 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:30.488 11:41:29 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.488 11:41:29 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:16:30.488 11:41:29 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.488 11:41:29 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:30.488 11:41:29 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.488 11:41:29 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:16:30.488 11:41:29 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.488 11:41:29 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:30.488 11:41:29 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.488 11:41:29 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:16:30.488 11:41:29 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:16:30.488 11:41:29 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.488 
11:41:29 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:30.488 11:41:29 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:16:30.488 11:41:29 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.488 11:41:29 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:16:30.488 11:41:29 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "10a9b0e4-16a4-429e-a2c7-52501e1d4abd"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "10a9b0e4-16a4-429e-a2c7-52501e1d4abd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "736249f8-3696-4279-b0da-fccc7a5dea56"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "736249f8-3696-4279-b0da-fccc7a5dea56",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "d8cdd02b-592f-4979-8c78-52001f96a902"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d8cdd02b-592f-4979-8c78-52001f96a902",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "47a93dc4-fe91-409b-b935-5c1d9261daeb"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "47a93dc4-fe91-409b-b935-5c1d9261daeb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' 
"supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "876a94cb-c4a7-4b8a-8399-5828440ca8c8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "876a94cb-c4a7-4b8a-8399-5828440ca8c8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "dcccfcac-dbd4-466c-aefb-70abf14fae79"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "dcccfcac-dbd4-466c-aefb-70abf14fae79",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:16:30.488 11:41:29 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:16:30.488 11:41:29 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:16:30.488 11:41:29 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:16:30.488 11:41:29 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:16:30.488 11:41:29 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 75115 00:16:30.488 11:41:29 blockdev_xnvme -- common/autotest_common.sh@950 -- # '[' -z 75115 ']' 00:16:30.488 11:41:29 blockdev_xnvme -- common/autotest_common.sh@954 -- # kill -0 75115 00:16:30.488 11:41:29 blockdev_xnvme -- common/autotest_common.sh@955 -- # uname 00:16:30.488 11:41:29 blockdev_xnvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:30.488 11:41:29 blockdev_xnvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75115 00:16:30.488 killing process with pid 75115 00:16:30.488 11:41:29 blockdev_xnvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:30.488 11:41:29 blockdev_xnvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:30.488 11:41:29 blockdev_xnvme -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 75115' 00:16:30.488 11:41:29 blockdev_xnvme -- common/autotest_common.sh@969 -- # kill 75115 00:16:30.488 11:41:29 blockdev_xnvme -- common/autotest_common.sh@974 -- # wait 75115 00:16:33.015 11:41:31 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:33.015 11:41:31 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:16:33.015 11:41:31 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:16:33.015 11:41:31 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:33.015 11:41:31 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:33.015 ************************************ 00:16:33.015 START TEST bdev_hello_world 00:16:33.015 ************************************ 00:16:33.015 11:41:31 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:16:33.015 [2024-07-25 11:41:31.858668] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:16:33.015 [2024-07-25 11:41:31.858878] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75487 ] 00:16:33.015 [2024-07-25 11:41:32.042068] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:33.274 [2024-07-25 11:41:32.292089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:33.841 [2024-07-25 11:41:32.734823] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:16:33.841 [2024-07-25 11:41:32.734954] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:16:33.841 [2024-07-25 11:41:32.734989] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:16:33.841 [2024-07-25 11:41:32.737672] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:16:33.841 [2024-07-25 11:41:32.738075] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:16:33.841 [2024-07-25 11:41:32.738103] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:16:33.841 [2024-07-25 11:41:32.738306] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
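The trace above shows how blockdev.sh picked the target for the hello-world test: it filtered the bdev list down to unclaimed bdevs with jq, collected the names with mapfile, and took the first one (nvme0n1). Note that all six xNVMe bdevs advertise only read, write, and write_zeroes in supported_io_types. A minimal sketch of that selection, assuming the list comes from rpc.py bdev_get_bdevs (the RPC call itself is not visible in this excerpt) and paths relative to the SPDK repo root:

  # Keep only unclaimed bdevs, then extract their names (jq filters as traced).
  bdevs_json=$(scripts/rpc.py bdev_get_bdevs)
  mapfile -t bdevs_name < <(jq -r '.[] | select(.claimed == false)' <<< "$bdevs_json" | jq -r .name)
  hello_world_bdev=${bdevs_name[0]}   # -> nvme0n1 in this run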
00:16:33.841 00:16:33.841 [2024-07-25 11:41:32.738342] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:16:35.258 00:16:35.258 ************************************ 00:16:35.258 END TEST bdev_hello_world 00:16:35.258 ************************************ 00:16:35.258 real 0m2.222s 00:16:35.258 user 0m1.802s 00:16:35.258 sys 0m0.302s 00:16:35.258 11:41:33 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:35.258 11:41:33 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:16:35.258 11:41:34 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:16:35.258 11:41:34 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:35.258 11:41:34 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:35.258 11:41:34 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:35.258 ************************************ 00:16:35.258 START TEST bdev_bounds 00:16:35.258 ************************************ 00:16:35.258 Process bdevio pid: 75529 00:16:35.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:35.258 11:41:34 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:16:35.258 11:41:34 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=75529 00:16:35.258 11:41:34 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:16:35.258 11:41:34 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 75529' 00:16:35.258 11:41:34 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 75529 00:16:35.258 11:41:34 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 75529 ']' 00:16:35.258 11:41:34 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:16:35.258 11:41:34 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:35.258 11:41:34 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:35.258 11:41:34 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:35.258 11:41:34 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:35.258 11:41:34 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:16:35.258 [2024-07-25 11:41:34.139203] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
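bdev_hello_world finished in about 2.2 s of wall time. bdev_bounds then launches the bdevio app (-w appears to make it wait for the perform_tests RPC issued later) and blocks in waitforlisten until the app answers on /var/tmp/spdk.sock. A rough sketch of that readiness poll; max_retries=100 is from the trace, while rpc_get_methods as the probe command is an assumption:

  rpc_addr=/var/tmp/spdk.sock
  for ((i = 1; i <= 100; i++)); do
      # Probe the UNIX-domain RPC socket until the app responds.
      scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && break
      sleep 0.1
  done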
00:16:35.258 [2024-07-25 11:41:34.139410] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75529 ] 00:16:35.515 [2024-07-25 11:41:34.330489] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:35.774 [2024-07-25 11:41:34.686400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:35.774 [2024-07-25 11:41:34.686544] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.774 [2024-07-25 11:41:34.686554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:36.339 11:41:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:36.339 11:41:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:16:36.339 11:41:35 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:16:36.339 I/O targets: 00:16:36.339 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:16:36.339 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:16:36.339 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:16:36.339 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:16:36.339 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:16:36.339 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:16:36.339 00:16:36.339 00:16:36.339 CUnit - A unit testing framework for C - Version 2.1-3 00:16:36.339 http://cunit.sourceforge.net/ 00:16:36.339 00:16:36.339 00:16:36.339 Suite: bdevio tests on: nvme3n1 00:16:36.339 Test: blockdev write read block ...passed 00:16:36.339 Test: blockdev write zeroes read block ...passed 00:16:36.339 Test: blockdev write zeroes read no split ...passed 00:16:36.339 Test: blockdev write zeroes read split ...passed 00:16:36.596 Test: blockdev write zeroes read split partial ...passed 00:16:36.596 Test: blockdev reset ...passed 00:16:36.596 Test: blockdev write read 8 blocks ...passed 00:16:36.596 Test: blockdev write read size > 128k ...passed 00:16:36.596 Test: blockdev write read invalid size ...passed 00:16:36.596 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:36.596 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:36.596 Test: blockdev write read max offset ...passed 00:16:36.596 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:36.596 Test: blockdev writev readv 8 blocks ...passed 00:16:36.596 Test: blockdev writev readv 30 x 1block ...passed 00:16:36.596 Test: blockdev writev readv block ...passed 00:16:36.596 Test: blockdev writev readv size > 128k ...passed 00:16:36.596 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:36.596 Test: blockdev comparev and writev ...passed 00:16:36.596 Test: blockdev nvme passthru rw ...passed 00:16:36.596 Test: blockdev nvme passthru vendor specific ...passed 00:16:36.596 Test: blockdev nvme admin passthru ...passed 00:16:36.596 Test: blockdev copy ...passed 00:16:36.596 Suite: bdevio tests on: nvme2n3 00:16:36.596 Test: blockdev write read block ...passed 00:16:36.596 Test: blockdev write zeroes read block ...passed 00:16:36.596 Test: blockdev write zeroes read no split ...passed 00:16:36.596 Test: blockdev write zeroes read split ...passed 00:16:36.596 Test: blockdev write zeroes read split partial ...passed 00:16:36.596 Test: blockdev reset ...passed 
00:16:36.596 Test: blockdev write read 8 blocks ...passed 00:16:36.596 Test: blockdev write read size > 128k ...passed 00:16:36.596 Test: blockdev write read invalid size ...passed 00:16:36.596 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:36.596 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:36.596 Test: blockdev write read max offset ...passed 00:16:36.596 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:36.596 Test: blockdev writev readv 8 blocks ...passed 00:16:36.596 Test: blockdev writev readv 30 x 1block ...passed 00:16:36.596 Test: blockdev writev readv block ...passed 00:16:36.596 Test: blockdev writev readv size > 128k ...passed 00:16:36.596 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:36.596 Test: blockdev comparev and writev ...passed 00:16:36.596 Test: blockdev nvme passthru rw ...passed 00:16:36.596 Test: blockdev nvme passthru vendor specific ...passed 00:16:36.596 Test: blockdev nvme admin passthru ...passed 00:16:36.596 Test: blockdev copy ...passed 00:16:36.596 Suite: bdevio tests on: nvme2n2 00:16:36.596 Test: blockdev write read block ...passed 00:16:36.596 Test: blockdev write zeroes read block ...passed 00:16:36.596 Test: blockdev write zeroes read no split ...passed 00:16:36.596 Test: blockdev write zeroes read split ...passed 00:16:36.596 Test: blockdev write zeroes read split partial ...passed 00:16:36.596 Test: blockdev reset ...passed 00:16:36.596 Test: blockdev write read 8 blocks ...passed 00:16:36.596 Test: blockdev write read size > 128k ...passed 00:16:36.596 Test: blockdev write read invalid size ...passed 00:16:36.596 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:36.596 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:36.596 Test: blockdev write read max offset ...passed 00:16:36.597 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:36.597 Test: blockdev writev readv 8 blocks ...passed 00:16:36.597 Test: blockdev writev readv 30 x 1block ...passed 00:16:36.597 Test: blockdev writev readv block ...passed 00:16:36.597 Test: blockdev writev readv size > 128k ...passed 00:16:36.597 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:36.597 Test: blockdev comparev and writev ...passed 00:16:36.597 Test: blockdev nvme passthru rw ...passed 00:16:36.597 Test: blockdev nvme passthru vendor specific ...passed 00:16:36.597 Test: blockdev nvme admin passthru ...passed 00:16:36.597 Test: blockdev copy ...passed 00:16:36.597 Suite: bdevio tests on: nvme2n1 00:16:36.597 Test: blockdev write read block ...passed 00:16:36.597 Test: blockdev write zeroes read block ...passed 00:16:36.597 Test: blockdev write zeroes read no split ...passed 00:16:36.597 Test: blockdev write zeroes read split ...passed 00:16:36.597 Test: blockdev write zeroes read split partial ...passed 00:16:36.597 Test: blockdev reset ...passed 00:16:36.597 Test: blockdev write read 8 blocks ...passed 00:16:36.597 Test: blockdev write read size > 128k ...passed 00:16:36.597 Test: blockdev write read invalid size ...passed 00:16:36.597 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:36.597 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:36.597 Test: blockdev write read max offset ...passed 00:16:36.597 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:36.597 Test: blockdev writev readv 8 blocks 
...passed 00:16:36.597 Test: blockdev writev readv 30 x 1block ...passed 00:16:36.597 Test: blockdev writev readv block ...passed 00:16:36.597 Test: blockdev writev readv size > 128k ...passed 00:16:36.597 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:36.597 Test: blockdev comparev and writev ...passed 00:16:36.597 Test: blockdev nvme passthru rw ...passed 00:16:36.597 Test: blockdev nvme passthru vendor specific ...passed 00:16:36.597 Test: blockdev nvme admin passthru ...passed 00:16:36.597 Test: blockdev copy ...passed 00:16:36.597 Suite: bdevio tests on: nvme1n1 00:16:36.597 Test: blockdev write read block ...passed 00:16:36.597 Test: blockdev write zeroes read block ...passed 00:16:36.597 Test: blockdev write zeroes read no split ...passed 00:16:36.855 Test: blockdev write zeroes read split ...passed 00:16:36.855 Test: blockdev write zeroes read split partial ...passed 00:16:36.855 Test: blockdev reset ...passed 00:16:36.855 Test: blockdev write read 8 blocks ...passed 00:16:36.855 Test: blockdev write read size > 128k ...passed 00:16:36.855 Test: blockdev write read invalid size ...passed 00:16:36.855 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:36.855 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:36.855 Test: blockdev write read max offset ...passed 00:16:36.855 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:36.855 Test: blockdev writev readv 8 blocks ...passed 00:16:36.855 Test: blockdev writev readv 30 x 1block ...passed 00:16:36.855 Test: blockdev writev readv block ...passed 00:16:36.855 Test: blockdev writev readv size > 128k ...passed 00:16:36.855 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:36.855 Test: blockdev comparev and writev ...passed 00:16:36.855 Test: blockdev nvme passthru rw ...passed 00:16:36.855 Test: blockdev nvme passthru vendor specific ...passed 00:16:36.855 Test: blockdev nvme admin passthru ...passed 00:16:36.855 Test: blockdev copy ...passed 00:16:36.855 Suite: bdevio tests on: nvme0n1 00:16:36.855 Test: blockdev write read block ...passed 00:16:36.855 Test: blockdev write zeroes read block ...passed 00:16:36.855 Test: blockdev write zeroes read no split ...passed 00:16:36.855 Test: blockdev write zeroes read split ...passed 00:16:36.855 Test: blockdev write zeroes read split partial ...passed 00:16:36.855 Test: blockdev reset ...passed 00:16:36.855 Test: blockdev write read 8 blocks ...passed 00:16:36.855 Test: blockdev write read size > 128k ...passed 00:16:36.855 Test: blockdev write read invalid size ...passed 00:16:36.855 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:36.855 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:36.855 Test: blockdev write read max offset ...passed 00:16:36.855 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:36.855 Test: blockdev writev readv 8 blocks ...passed 00:16:36.855 Test: blockdev writev readv 30 x 1block ...passed 00:16:36.855 Test: blockdev writev readv block ...passed 00:16:36.855 Test: blockdev writev readv size > 128k ...passed 00:16:36.855 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:36.855 Test: blockdev comparev and writev ...passed 00:16:36.855 Test: blockdev nvme passthru rw ...passed 00:16:36.855 Test: blockdev nvme passthru vendor specific ...passed 00:16:36.855 Test: blockdev nvme admin passthru ...passed 00:16:36.855 Test: blockdev copy ...passed 
00:16:36.855 00:16:36.855 Run Summary: Type Total Ran Passed Failed Inactive 00:16:36.855 suites 6 6 n/a 0 0 00:16:36.855 tests 138 138 138 0 0 00:16:36.855 asserts 780 780 780 0 n/a 00:16:36.855 00:16:36.855 Elapsed time = 1.214 seconds 00:16:36.855 0 00:16:36.855 11:41:35 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 75529 00:16:36.855 11:41:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 75529 ']' 00:16:36.855 11:41:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 75529 00:16:36.855 11:41:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:16:36.855 11:41:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:36.855 11:41:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75529 00:16:36.855 11:41:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:36.855 11:41:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:36.855 11:41:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75529' 00:16:36.855 killing process with pid 75529 00:16:36.855 11:41:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 75529 00:16:36.855 11:41:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 75529 00:16:38.229 11:41:37 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:16:38.229 00:16:38.229 real 0m3.004s 00:16:38.229 user 0m6.724s 00:16:38.229 sys 0m0.505s 00:16:38.229 11:41:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:38.229 ************************************ 00:16:38.229 END TEST bdev_bounds 00:16:38.229 ************************************ 00:16:38.229 11:41:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:16:38.229 11:41:37 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:16:38.229 11:41:37 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:38.229 11:41:37 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:38.229 11:41:37 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:38.229 ************************************ 00:16:38.229 START TEST bdev_nbd 00:16:38.229 ************************************ 00:16:38.229 11:41:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:16:38.229 11:41:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:16:38.229 11:41:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:16:38.229 11:41:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:38.229 11:41:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:38.229 11:41:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:16:38.229 11:41:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:16:38.229 11:41:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
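The bounds run covered all six xNVMe bdevs: 6 suites, 138 tests, 780 asserts, zero failures, 1.214 s elapsed. Re-running that suite by hand might look like the sketch below (binary and config paths as in the trace, relative to the repo root; the socket-readiness wait is replaced by a crude sleep):

  test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
  bdevio_pid=$!
  sleep 1                                   # stand-in for waitforlisten
  test/bdev/bdevio/tests.py perform_tests   # triggers the CUnit suites above
  kill "$bdevio_pid"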
00:16:38.229 11:41:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:16:38.229 11:41:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:16:38.229 11:41:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:16:38.229 11:41:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:16:38.229 11:41:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:38.229 11:41:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:16:38.229 11:41:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:16:38.229 11:41:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:16:38.229 11:41:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=75600 00:16:38.229 11:41:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:16:38.229 11:41:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:16:38.229 11:41:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 75600 /var/tmp/spdk-nbd.sock 00:16:38.230 11:41:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 75600 ']' 00:16:38.230 11:41:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:16:38.230 11:41:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:38.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:16:38.230 11:41:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:16:38.230 11:41:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:38.230 11:41:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:16:38.230 [2024-07-25 11:41:37.184106] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
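Setup for bdev_nbd: the test is gated on the kernel nbd driver being present (/sys/module/nbd), prepares a 16-entry /dev/nbdX pool but uses only six devices, and starts a bare bdev_svc app (pid 75600 here) on its own RPC socket, /var/tmp/spdk-nbd.sock, so the nbd RPCs do not collide with the default socket. A hedged sketch of that prerequisite and launch, with the modprobe fallback being an assumption not shown in the trace:

  # The nbd test only runs when the kernel nbd driver is loaded.
  [[ -e /sys/module/nbd ]] || sudo modprobe nbd
  test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json test/bdev/bdev.json &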
00:16:38.230 [2024-07-25 11:41:37.184268] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:38.487 [2024-07-25 11:41:37.359622] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:38.744 [2024-07-25 11:41:37.612974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:39.307 11:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:39.307 11:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:16:39.307 11:41:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:16:39.307 11:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:39.307 11:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:16:39.307 11:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:16:39.307 11:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:16:39.307 11:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:39.307 11:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:16:39.307 11:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:16:39.307 11:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:16:39.307 11:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:16:39.307 11:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:16:39.307 11:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:39.307 11:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:16:39.564 11:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:16:39.564 11:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:16:39.564 11:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:16:39.564 11:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:39.564 11:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:16:39.564 11:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:39.564 11:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:39.564 11:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:39.564 11:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:16:39.564 11:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:39.564 11:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:39.564 11:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:39.564 
1+0 records in 00:16:39.564 1+0 records out 00:16:39.564 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000594398 s, 6.9 MB/s 00:16:39.564 11:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:39.564 11:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:16:39.564 11:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:39.564 11:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:39.564 11:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:16:39.564 11:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:39.564 11:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:39.564 11:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:16:39.823 11:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:16:39.823 11:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:16:39.823 11:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:16:39.823 11:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:16:39.823 11:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:16:39.823 11:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:39.823 11:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:39.823 11:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:16:39.823 11:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:16:39.823 11:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:39.823 11:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:39.823 11:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:39.823 1+0 records in 00:16:39.823 1+0 records out 00:16:39.823 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000523277 s, 7.8 MB/s 00:16:39.823 11:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:39.823 11:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:16:39.823 11:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:39.823 11:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:39.823 11:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:16:39.823 11:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:39.823 11:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:39.823 11:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:16:40.081 11:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:16:40.081 11:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:16:40.081 11:41:38 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:16:40.081 11:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:16:40.081 11:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:16:40.081 11:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:40.081 11:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:40.081 11:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:16:40.081 11:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:16:40.081 11:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:40.081 11:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:40.081 11:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:40.081 1+0 records in 00:16:40.081 1+0 records out 00:16:40.081 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000516663 s, 7.9 MB/s 00:16:40.081 11:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:40.081 11:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:16:40.081 11:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:40.081 11:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:40.081 11:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:16:40.081 11:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:40.081 11:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:40.081 11:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:16:40.339 11:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:16:40.339 11:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:16:40.339 11:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:16:40.339 11:41:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:16:40.339 11:41:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:16:40.339 11:41:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:40.339 11:41:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:40.339 11:41:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:16:40.340 11:41:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:16:40.340 11:41:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:40.340 11:41:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:40.340 11:41:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:40.340 1+0 records in 00:16:40.340 1+0 records out 00:16:40.340 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000703726 s, 5.8 MB/s 00:16:40.340 11:41:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:40.340 11:41:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:16:40.340 11:41:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:40.340 11:41:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:40.340 11:41:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:16:40.340 11:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:40.340 11:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:40.340 11:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:16:40.598 11:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:16:40.598 11:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:16:40.598 11:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:16:40.598 11:41:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:16:40.598 11:41:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:16:40.598 11:41:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:40.598 11:41:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:40.598 11:41:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:16:40.598 11:41:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:16:40.598 11:41:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:40.598 11:41:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:40.598 11:41:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:40.598 1+0 records in 00:16:40.598 1+0 records out 00:16:40.598 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000592008 s, 6.9 MB/s 00:16:40.598 11:41:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:40.598 11:41:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:16:40.598 11:41:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:40.598 11:41:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:40.598 11:41:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:16:40.598 11:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:40.598 11:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:40.598 11:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:16:40.895 11:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:16:40.895 11:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:16:40.895 11:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:16:40.895 11:41:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:16:40.895 11:41:39 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:16:40.895 11:41:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:40.895 11:41:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:40.895 11:41:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:16:40.895 11:41:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:16:40.895 11:41:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:40.895 11:41:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:40.895 11:41:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:40.895 1+0 records in 00:16:40.895 1+0 records out 00:16:40.895 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0007779 s, 5.3 MB/s 00:16:40.895 11:41:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:40.895 11:41:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:16:40.895 11:41:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:40.895 11:41:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:40.895 11:41:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:16:40.895 11:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:40.895 11:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:40.895 11:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:41.154 11:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:16:41.154 { 00:16:41.154 "nbd_device": "/dev/nbd0", 00:16:41.154 "bdev_name": "nvme0n1" 00:16:41.154 }, 00:16:41.154 { 00:16:41.154 "nbd_device": "/dev/nbd1", 00:16:41.154 "bdev_name": "nvme1n1" 00:16:41.154 }, 00:16:41.154 { 00:16:41.154 "nbd_device": "/dev/nbd2", 00:16:41.154 "bdev_name": "nvme2n1" 00:16:41.154 }, 00:16:41.154 { 00:16:41.154 "nbd_device": "/dev/nbd3", 00:16:41.154 "bdev_name": "nvme2n2" 00:16:41.154 }, 00:16:41.154 { 00:16:41.154 "nbd_device": "/dev/nbd4", 00:16:41.154 "bdev_name": "nvme2n3" 00:16:41.154 }, 00:16:41.154 { 00:16:41.154 "nbd_device": "/dev/nbd5", 00:16:41.154 "bdev_name": "nvme3n1" 00:16:41.154 } 00:16:41.154 ]' 00:16:41.154 11:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:16:41.154 11:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:16:41.154 { 00:16:41.154 "nbd_device": "/dev/nbd0", 00:16:41.154 "bdev_name": "nvme0n1" 00:16:41.154 }, 00:16:41.154 { 00:16:41.154 "nbd_device": "/dev/nbd1", 00:16:41.154 "bdev_name": "nvme1n1" 00:16:41.154 }, 00:16:41.154 { 00:16:41.154 "nbd_device": "/dev/nbd2", 00:16:41.154 "bdev_name": "nvme2n1" 00:16:41.154 }, 00:16:41.154 { 00:16:41.154 "nbd_device": "/dev/nbd3", 00:16:41.154 "bdev_name": "nvme2n2" 00:16:41.154 }, 00:16:41.154 { 00:16:41.154 "nbd_device": "/dev/nbd4", 00:16:41.154 "bdev_name": "nvme2n3" 00:16:41.154 }, 00:16:41.154 { 00:16:41.154 "nbd_device": "/dev/nbd5", 00:16:41.154 "bdev_name": "nvme3n1" 00:16:41.154 } 00:16:41.154 ]' 00:16:41.154 11:41:40 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:16:41.411 11:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:16:41.411 11:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:41.411 11:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:16:41.411 11:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:41.411 11:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:41.411 11:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:41.411 11:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:41.669 11:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:41.669 11:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:41.669 11:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:41.669 11:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:41.669 11:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:41.669 11:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:41.669 11:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:41.669 11:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:41.669 11:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:41.669 11:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:16:41.927 11:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:41.927 11:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:41.927 11:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:41.927 11:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:41.927 11:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:41.927 11:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:41.927 11:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:41.927 11:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:41.927 11:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:41.927 11:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:16:42.185 11:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:16:42.185 11:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:16:42.185 11:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:16:42.185 11:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:42.185 11:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:42.185 11:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:16:42.185 11:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:42.185 11:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:42.185 11:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:42.185 11:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:16:42.442 11:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:16:42.443 11:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:16:42.443 11:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:16:42.443 11:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:42.443 11:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:42.443 11:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:16:42.443 11:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:42.443 11:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:42.443 11:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:42.443 11:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:16:43.007 11:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:16:43.007 11:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:16:43.007 11:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:16:43.007 11:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:43.007 11:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:43.007 11:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:16:43.007 11:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:43.007 11:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:43.007 11:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:43.007 11:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:16:43.007 11:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:16:43.007 11:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:16:43.007 11:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:16:43.007 11:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:43.007 11:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:43.007 11:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:16:43.007 11:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:43.008 11:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:43.008 11:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:43.008 11:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:43.008 11:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:43.265 11:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:43.265 11:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:43.265 11:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:43.522 11:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:43.522 11:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:16:43.522 11:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:43.522 11:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:16:43.522 11:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:16:43.522 11:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:16:43.522 11:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:16:43.522 11:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:16:43.522 11:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:16:43.523 11:41:42 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:16:43.523 11:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:43.523 11:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:16:43.523 11:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:16:43.523 11:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:43.523 11:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:16:43.523 11:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:16:43.523 11:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:43.523 11:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:16:43.523 11:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:43.523 11:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:43.523 11:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:43.523 11:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:16:43.523 11:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:43.523 11:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:43.523 11:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:16:43.780 /dev/nbd0 00:16:43.780 11:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:43.780 11:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:43.780 11:41:42 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:43.780 11:41:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:16:43.780 11:41:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:43.780 11:41:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:43.780 11:41:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:43.780 11:41:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:16:43.780 11:41:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:43.780 11:41:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:43.780 11:41:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:43.780 1+0 records in 00:16:43.780 1+0 records out 00:16:43.780 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000541989 s, 7.6 MB/s 00:16:43.780 11:41:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:43.780 11:41:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:16:43.780 11:41:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:43.780 11:41:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:43.780 11:41:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:16:43.780 11:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:43.780 11:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:43.780 11:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:16:44.038 /dev/nbd1 00:16:44.038 11:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:44.038 11:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:44.038 11:41:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:16:44.038 11:41:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:16:44.038 11:41:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:44.038 11:41:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:44.038 11:41:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:16:44.038 11:41:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:16:44.038 11:41:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:44.038 11:41:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:44.038 11:41:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:44.038 1+0 records in 00:16:44.038 1+0 records out 00:16:44.038 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000639016 s, 6.4 MB/s 00:16:44.038 11:41:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:44.038 11:41:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:16:44.038 11:41:42 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:44.038 11:41:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:44.038 11:41:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:16:44.038 11:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:44.038 11:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:44.038 11:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:16:44.296 /dev/nbd10 00:16:44.296 11:41:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:16:44.296 11:41:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:16:44.296 11:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:16:44.296 11:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:16:44.296 11:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:44.296 11:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:44.296 11:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:16:44.296 11:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:16:44.296 11:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:44.296 11:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:44.296 11:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:44.296 1+0 records in 00:16:44.296 1+0 records out 00:16:44.296 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000638564 s, 6.4 MB/s 00:16:44.296 11:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:44.296 11:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:16:44.296 11:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:44.296 11:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:44.296 11:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:16:44.296 11:41:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:44.296 11:41:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:44.296 11:41:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:16:44.554 /dev/nbd11 00:16:44.554 11:41:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:16:44.554 11:41:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:16:44.554 11:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:16:44.554 11:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:16:44.554 11:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:44.554 11:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:44.554 11:41:43 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:16:44.554 11:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:16:44.554 11:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:44.554 11:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:44.554 11:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:44.554 1+0 records in 00:16:44.554 1+0 records out 00:16:44.554 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000631797 s, 6.5 MB/s 00:16:44.554 11:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:44.554 11:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:16:44.554 11:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:44.554 11:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:44.554 11:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:16:44.554 11:41:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:44.554 11:41:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:44.554 11:41:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:16:44.813 /dev/nbd12 00:16:44.813 11:41:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:16:44.813 11:41:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:16:44.813 11:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:16:44.813 11:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:16:44.813 11:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:44.813 11:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:44.813 11:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:16:44.813 11:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:16:44.813 11:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:44.813 11:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:44.813 11:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:44.813 1+0 records in 00:16:44.813 1+0 records out 00:16:44.813 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000991353 s, 4.1 MB/s 00:16:44.813 11:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:44.813 11:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:16:44.813 11:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:44.813 11:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:44.813 11:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:16:44.813 11:41:43 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:44.813 11:41:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:44.813 11:41:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:16:45.071 /dev/nbd13 00:16:45.071 11:41:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:16:45.071 11:41:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:16:45.071 11:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:16:45.071 11:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:16:45.071 11:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:45.071 11:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:45.071 11:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:16:45.071 11:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:16:45.071 11:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:45.071 11:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:45.071 11:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:45.071 1+0 records in 00:16:45.071 1+0 records out 00:16:45.071 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000989673 s, 4.1 MB/s 00:16:45.071 11:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:45.071 11:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:16:45.071 11:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:45.071 11:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:45.071 11:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:16:45.071 11:41:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:45.071 11:41:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:45.071 11:41:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:45.071 11:41:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:45.071 11:41:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:45.329 11:41:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:16:45.330 { 00:16:45.330 "nbd_device": "/dev/nbd0", 00:16:45.330 "bdev_name": "nvme0n1" 00:16:45.330 }, 00:16:45.330 { 00:16:45.330 "nbd_device": "/dev/nbd1", 00:16:45.330 "bdev_name": "nvme1n1" 00:16:45.330 }, 00:16:45.330 { 00:16:45.330 "nbd_device": "/dev/nbd10", 00:16:45.330 "bdev_name": "nvme2n1" 00:16:45.330 }, 00:16:45.330 { 00:16:45.330 "nbd_device": "/dev/nbd11", 00:16:45.330 "bdev_name": "nvme2n2" 00:16:45.330 }, 00:16:45.330 { 00:16:45.330 "nbd_device": "/dev/nbd12", 00:16:45.330 "bdev_name": "nvme2n3" 00:16:45.330 }, 00:16:45.330 { 00:16:45.330 "nbd_device": "/dev/nbd13", 00:16:45.330 "bdev_name": "nvme3n1" 00:16:45.330 } 00:16:45.330 ]' 00:16:45.330 11:41:44 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:16:45.330 { 00:16:45.330 "nbd_device": "/dev/nbd0", 00:16:45.330 "bdev_name": "nvme0n1" 00:16:45.330 }, 00:16:45.330 { 00:16:45.330 "nbd_device": "/dev/nbd1", 00:16:45.330 "bdev_name": "nvme1n1" 00:16:45.330 }, 00:16:45.330 { 00:16:45.330 "nbd_device": "/dev/nbd10", 00:16:45.330 "bdev_name": "nvme2n1" 00:16:45.330 }, 00:16:45.330 { 00:16:45.330 "nbd_device": "/dev/nbd11", 00:16:45.330 "bdev_name": "nvme2n2" 00:16:45.330 }, 00:16:45.330 { 00:16:45.330 "nbd_device": "/dev/nbd12", 00:16:45.330 "bdev_name": "nvme2n3" 00:16:45.330 }, 00:16:45.330 { 00:16:45.330 "nbd_device": "/dev/nbd13", 00:16:45.330 "bdev_name": "nvme3n1" 00:16:45.330 } 00:16:45.330 ]' 00:16:45.330 11:41:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:45.330 11:41:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:16:45.330 /dev/nbd1 00:16:45.330 /dev/nbd10 00:16:45.330 /dev/nbd11 00:16:45.330 /dev/nbd12 00:16:45.330 /dev/nbd13' 00:16:45.330 11:41:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:16:45.330 /dev/nbd1 00:16:45.330 /dev/nbd10 00:16:45.330 /dev/nbd11 00:16:45.330 /dev/nbd12 00:16:45.330 /dev/nbd13' 00:16:45.330 11:41:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:45.330 11:41:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:16:45.330 11:41:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:16:45.330 11:41:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:16:45.330 11:41:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:16:45.330 11:41:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:16:45.330 11:41:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:45.330 11:41:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:45.330 11:41:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:16:45.330 11:41:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:45.330 11:41:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:16:45.330 11:41:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:16:45.330 256+0 records in 00:16:45.330 256+0 records out 00:16:45.330 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00647793 s, 162 MB/s 00:16:45.330 11:41:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:45.330 11:41:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:16:45.588 256+0 records in 00:16:45.588 256+0 records out 00:16:45.588 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.138471 s, 7.6 MB/s 00:16:45.588 11:41:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:45.588 11:41:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:16:45.846 256+0 records in 00:16:45.846 256+0 records out 00:16:45.846 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.186476 s, 5.6 MB/s 00:16:45.846 11:41:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:45.846 11:41:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:16:45.846 256+0 records in 00:16:45.846 256+0 records out 00:16:45.846 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.169755 s, 6.2 MB/s 00:16:45.846 11:41:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:45.846 11:41:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:16:46.103 256+0 records in 00:16:46.103 256+0 records out 00:16:46.103 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.145487 s, 7.2 MB/s 00:16:46.103 11:41:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:46.103 11:41:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:16:46.103 256+0 records in 00:16:46.103 256+0 records out 00:16:46.103 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.173483 s, 6.0 MB/s 00:16:46.103 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:46.103 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:16:46.362 256+0 records in 00:16:46.362 256+0 records out 00:16:46.362 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.160263 s, 6.5 MB/s 00:16:46.362 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:16:46.362 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:46.362 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:46.362 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:16:46.362 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:46.362 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:16:46.362 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:16:46.362 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:46.362 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:16:46.362 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:46.362 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:16:46.362 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:46.362 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:16:46.362 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:46.362 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:16:46.362 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:46.362 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:16:46.362 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:46.362 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:16:46.362 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:46.362 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:16:46.362 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:46.362 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:46.362 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:46.362 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:46.362 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:46.362 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:46.620 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:46.621 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:46.621 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:46.621 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:46.621 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:46.621 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:46.621 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:46.621 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:46.621 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:46.621 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:16:47.230 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:47.230 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:47.230 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:47.230 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:47.230 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:47.230 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:47.230 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:47.230 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:47.230 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:47.230 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:16:47.230 11:41:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:16:47.230 11:41:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:16:47.230 11:41:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:16:47.230 11:41:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:47.230 11:41:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:47.230 11:41:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:16:47.230 11:41:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:47.230 11:41:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:47.230 11:41:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:47.230 11:41:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:16:47.488 11:41:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:16:47.488 11:41:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:16:47.488 11:41:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:16:47.488 11:41:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:47.488 11:41:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:47.488 11:41:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:16:47.488 11:41:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:47.488 11:41:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:47.488 11:41:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:47.488 11:41:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:16:48.053 11:41:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:16:48.053 11:41:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:16:48.053 11:41:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:16:48.053 11:41:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:48.053 11:41:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:48.053 11:41:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:16:48.053 11:41:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:48.053 11:41:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:48.053 11:41:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:48.053 11:41:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:16:48.311 11:41:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:16:48.311 11:41:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:16:48.311 11:41:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:16:48.311 11:41:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:48.311 11:41:47 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:48.311 11:41:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:16:48.311 11:41:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:48.311 11:41:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:48.311 11:41:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:48.312 11:41:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:48.312 11:41:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:48.569 11:41:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:48.569 11:41:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:48.569 11:41:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:48.569 11:41:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:48.569 11:41:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:16:48.569 11:41:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:48.569 11:41:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:16:48.569 11:41:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:16:48.569 11:41:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:16:48.569 11:41:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:16:48.569 11:41:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:16:48.569 11:41:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:16:48.569 11:41:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:16:48.569 11:41:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:48.569 11:41:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:48.569 11:41:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:16:48.569 11:41:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:16:48.569 11:41:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:16:48.827 malloc_lvol_verify 00:16:48.827 11:41:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:16:49.086 3872b2ef-6c76-4d12-b498-1f3b2b8f791f 00:16:49.086 11:41:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:16:49.345 89e6dec8-bb3a-43e9-9c0c-07cc8184bfd0 00:16:49.345 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:16:49.604 /dev/nbd0 00:16:49.604 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:16:49.604 mke2fs 1.46.5 (30-Dec-2021) 00:16:49.604 Discarding device blocks: 0/4096 done 
00:16:49.604 Creating filesystem with 4096 1k blocks and 1024 inodes 00:16:49.604 00:16:49.604 Allocating group tables: 0/1 done 00:16:49.604 Writing inode tables: 0/1 done 00:16:49.604 Creating journal (1024 blocks): done 00:16:49.604 Writing superblocks and filesystem accounting information: 0/1 done 00:16:49.604 00:16:49.604 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:16:49.604 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:49.604 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:49.604 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:49.604 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:49.604 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:49.604 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:49.604 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:49.862 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:49.862 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:49.862 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:49.862 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:49.862 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:49.862 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:49.863 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:49.863 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:49.863 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:16:49.863 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:16:49.863 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 75600 00:16:49.863 11:41:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 75600 ']' 00:16:49.863 11:41:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 75600 00:16:49.863 11:41:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:16:49.863 11:41:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:49.863 11:41:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75600 00:16:49.863 killing process with pid 75600 00:16:49.863 11:41:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:49.863 11:41:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:49.863 11:41:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75600' 00:16:49.863 11:41:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 75600 00:16:49.863 11:41:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 75600 00:16:51.238 ************************************ 00:16:51.238 END TEST bdev_nbd 00:16:51.238 ************************************ 00:16:51.238 11:41:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:16:51.238 00:16:51.238 real 
0m13.081s 00:16:51.238 user 0m18.321s 00:16:51.238 sys 0m4.434s 00:16:51.238 11:41:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:51.238 11:41:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:16:51.238 11:41:50 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:16:51.238 11:41:50 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:16:51.238 11:41:50 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:16:51.238 11:41:50 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:16:51.238 11:41:50 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:51.238 11:41:50 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:51.238 11:41:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:51.238 ************************************ 00:16:51.238 START TEST bdev_fio 00:16:51.238 ************************************ 00:16:51.238 11:41:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:16:51.238 11:41:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:16:51.238 11:41:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:16:51.238 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:16:51.238 11:41:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:16:51.238 11:41:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:16:51.238 11:41:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:16:51.238 11:41:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:16:51.238 11:41:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:16:51.238 11:41:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:51.238 11:41:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:16:51.238 11:41:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:16:51.238 11:41:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:16:51.238 11:41:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:16:51.238 11:41:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:16:51.238 11:41:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:16:51.238 11:41:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:16:51.238 11:41:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:51.238 11:41:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:16:51.238 11:41:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:16:51.238 11:41:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:16:51.238 11:41:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:16:51.238 11:41:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:16:51.238 11:41:50 
blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:16:51.238 11:41:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:16:51.238 11:41:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:51.238 11:41:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:16:51.238 11:41:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:16:51.238 11:41:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:51.238 11:41:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:16:51.238 11:41:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:16:51.238 11:41:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:51.238 11:41:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:16:51.238 11:41:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:16:51.238 11:41:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:51.238 11:41:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:16:51.238 11:41:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:16:51.238 11:41:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:51.238 11:41:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:16:51.238 11:41:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:16:51.238 11:41:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:51.238 11:41:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:16:51.238 11:41:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:16:51.239 11:41:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:16:51.239 11:41:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:51.239 11:41:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:16:51.239 11:41:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:51.239 11:41:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:16:51.239 ************************************ 00:16:51.239 START TEST bdev_fio_rw_verify 00:16:51.239 ************************************ 00:16:51.239 11:41:50 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:51.239 11:41:50 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:51.239 11:41:50 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:16:51.239 11:41:50 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:51.239 11:41:50 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:16:51.239 11:41:50 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:51.239 11:41:50 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:16:51.239 11:41:50 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:16:51.239 11:41:50 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:51.239 11:41:50 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:51.239 11:41:50 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:51.239 11:41:50 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:16:51.496 11:41:50 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:51.496 11:41:50 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:51.496 11:41:50 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:16:51.496 11:41:50 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:51.496 11:41:50 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:51.496 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:51.496 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:51.496 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:51.496 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:51.496 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:51.496 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:51.496 fio-3.35 00:16:51.496 Starting 6 threads 00:17:03.691 00:17:03.691 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=76026: Thu Jul 25 11:42:01 2024 00:17:03.691 read: IOPS=27.8k, 
BW=109MiB/s (114MB/s)(1086MiB/10001msec) 00:17:03.691 slat (usec): min=3, max=2071, avg= 7.22, stdev= 6.36 00:17:03.691 clat (usec): min=126, max=6211, avg=675.88, stdev=247.07 00:17:03.691 lat (usec): min=130, max=6219, avg=683.10, stdev=247.76 00:17:03.691 clat percentiles (usec): 00:17:03.691 | 50.000th=[ 693], 99.000th=[ 1237], 99.900th=[ 2409], 99.990th=[ 5735], 00:17:03.691 | 99.999th=[ 6194] 00:17:03.691 write: IOPS=28.3k, BW=110MiB/s (116MB/s)(1104MiB/10001msec); 0 zone resets 00:17:03.691 slat (usec): min=13, max=5953, avg=26.85, stdev=28.63 00:17:03.691 clat (usec): min=93, max=6639, avg=752.35, stdev=251.54 00:17:03.691 lat (usec): min=120, max=6885, avg=779.20, stdev=253.91 00:17:03.691 clat percentiles (usec): 00:17:03.691 | 50.000th=[ 758], 99.000th=[ 1385], 99.900th=[ 2089], 99.990th=[ 5604], 00:17:03.691 | 99.999th=[ 6587] 00:17:03.691 bw ( KiB/s): min=97912, max=138120, per=100.00%, avg=113173.26, stdev=2129.48, samples=114 00:17:03.691 iops : min=24478, max=34530, avg=28293.11, stdev=532.36, samples=114 00:17:03.691 lat (usec) : 100=0.01%, 250=2.56%, 500=16.00%, 750=36.24%, 1000=36.21% 00:17:03.691 lat (msec) : 2=8.85%, 4=0.11%, 10=0.02% 00:17:03.691 cpu : usr=60.95%, sys=25.93%, ctx=6958, majf=0, minf=23955 00:17:03.691 IO depths : 1=12.0%, 2=24.5%, 4=50.5%, 8=13.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:03.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:03.691 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:03.691 issued rwts: total=278083,282610,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:03.691 latency : target=0, window=0, percentile=100.00%, depth=8 00:17:03.691 00:17:03.691 Run status group 0 (all jobs): 00:17:03.691 READ: bw=109MiB/s (114MB/s), 109MiB/s-109MiB/s (114MB/s-114MB/s), io=1086MiB (1139MB), run=10001-10001msec 00:17:03.692 WRITE: bw=110MiB/s (116MB/s), 110MiB/s-110MiB/s (116MB/s-116MB/s), io=1104MiB (1158MB), run=10001-10001msec 00:17:03.951 ----------------------------------------------------- 00:17:03.951 Suppressions used: 00:17:03.951 count bytes template 00:17:03.951 6 48 /usr/src/fio/parse.c 00:17:03.951 4329 415584 /usr/src/fio/iolog.c 00:17:03.951 1 8 libtcmalloc_minimal.so 00:17:03.951 1 904 libcrypto.so 00:17:03.951 ----------------------------------------------------- 00:17:03.951 00:17:03.951 00:17:03.951 real 0m12.532s 00:17:03.951 user 0m38.543s 00:17:03.951 sys 0m15.996s 00:17:03.951 11:42:02 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:03.951 11:42:02 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:17:03.951 ************************************ 00:17:03.951 END TEST bdev_fio_rw_verify 00:17:03.951 ************************************ 00:17:03.951 11:42:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:17:03.951 11:42:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:03.951 11:42:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:17:03.951 11:42:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:03.951 11:42:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:17:03.951 11:42:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:17:03.951 11:42:02 blockdev_xnvme.bdev_fio 
-- common/autotest_common.sh@1283 -- # local env_context= 00:17:03.951 11:42:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:17:03.951 11:42:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:17:03.951 11:42:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:17:03.951 11:42:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:17:03.951 11:42:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:03.951 11:42:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:17:03.951 11:42:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:17:03.951 11:42:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:17:03.951 11:42:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:17:03.951 11:42:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:17:03.951 11:42:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "10a9b0e4-16a4-429e-a2c7-52501e1d4abd"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "10a9b0e4-16a4-429e-a2c7-52501e1d4abd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "736249f8-3696-4279-b0da-fccc7a5dea56"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "736249f8-3696-4279-b0da-fccc7a5dea56",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "d8cdd02b-592f-4979-8c78-52001f96a902"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d8cdd02b-592f-4979-8c78-52001f96a902",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": 
true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "47a93dc4-fe91-409b-b935-5c1d9261daeb"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "47a93dc4-fe91-409b-b935-5c1d9261daeb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "876a94cb-c4a7-4b8a-8399-5828440ca8c8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "876a94cb-c4a7-4b8a-8399-5828440ca8c8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "dcccfcac-dbd4-466c-aefb-70abf14fae79"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "dcccfcac-dbd4-466c-aefb-70abf14fae79",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:17:03.951 11:42:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:17:03.951 11:42:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:03.951 /home/vagrant/spdk_repo/spdk 00:17:03.951 11:42:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:17:03.951 11:42:02 
blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:17:03.951 11:42:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:17:03.951 00:17:03.951 real 0m12.712s 00:17:03.951 user 0m38.648s 00:17:03.951 sys 0m16.072s 00:17:03.951 11:42:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:03.951 ************************************ 00:17:03.951 END TEST bdev_fio 00:17:03.951 ************************************ 00:17:03.951 11:42:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:17:03.951 11:42:02 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:03.951 11:42:02 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:03.951 11:42:02 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:17:03.951 11:42:02 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:03.951 11:42:02 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:03.951 ************************************ 00:17:03.951 START TEST bdev_verify 00:17:03.951 ************************************ 00:17:03.951 11:42:02 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:04.210 [2024-07-25 11:42:03.086416] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:17:04.210 [2024-07-25 11:42:03.086628] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76192 ] 00:17:04.470 [2024-07-25 11:42:03.272811] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:04.728 [2024-07-25 11:42:03.562280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.728 [2024-07-25 11:42:03.562294] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:05.294 Running I/O for 5 seconds... 
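(For reference, the verify pass above drives the bdevperf example app directly; a minimal standalone sketch of the traced invocation follows, with paths and flags copied verbatim from the run_test line above and the JSON config being the xnvme bdev table printed earlier. Judging from the paired Core Mask 0x1/0x2 job lines in the results below, -C together with -m 0x3 appears to fan each bdev's verify job out across both cores in the mask.)

    # Sketch of the traced bdev_verify invocation (flags verbatim from the
    # run_test trace above, not a new command):
    #   -q 128        : queue depth 128 per job
    #   -o 4096       : 4 KiB I/O size
    #   -w verify -t 5: 5-second verify workload
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3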
00:17:10.560 00:17:10.560 Latency(us) 00:17:10.560 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:10.560 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:10.560 Verification LBA range: start 0x0 length 0xa0000 00:17:10.560 nvme0n1 : 5.05 1547.35 6.04 0.00 0.00 82566.35 11677.32 96278.34 00:17:10.560 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:10.560 Verification LBA range: start 0xa0000 length 0xa0000 00:17:10.560 nvme0n1 : 5.02 1504.27 5.88 0.00 0.00 84931.60 11081.54 110577.11 00:17:10.560 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:10.560 Verification LBA range: start 0x0 length 0xbd0bd 00:17:10.560 nvme1n1 : 5.06 2813.93 10.99 0.00 0.00 45214.12 5421.61 63391.19 00:17:10.560 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:10.560 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:17:10.560 nvme1n1 : 5.07 2783.68 10.87 0.00 0.00 45718.95 4498.15 64821.06 00:17:10.560 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:10.560 Verification LBA range: start 0x0 length 0x80000 00:17:10.560 nvme2n1 : 5.06 1567.58 6.12 0.00 0.00 81003.94 19899.11 61961.31 00:17:10.560 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:10.560 Verification LBA range: start 0x80000 length 0x80000 00:17:10.560 nvme2n1 : 5.07 1590.08 6.21 0.00 0.00 80077.33 6553.60 74830.20 00:17:10.560 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:10.560 Verification LBA range: start 0x0 length 0x80000 00:17:10.560 nvme2n2 : 5.06 1566.85 6.12 0.00 0.00 80849.31 20256.58 68634.07 00:17:10.560 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:10.560 Verification LBA range: start 0x80000 length 0x80000 00:17:10.560 nvme2n2 : 5.05 1572.44 6.14 0.00 0.00 80806.33 20852.36 63867.81 00:17:10.560 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:10.560 Verification LBA range: start 0x0 length 0x80000 00:17:10.560 nvme2n3 : 5.08 1586.98 6.20 0.00 0.00 79701.75 3470.43 80073.08 00:17:10.560 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:10.560 Verification LBA range: start 0x80000 length 0x80000 00:17:10.560 nvme2n3 : 5.05 1571.52 6.14 0.00 0.00 80702.28 14239.19 67680.81 00:17:10.560 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:10.560 Verification LBA range: start 0x0 length 0x20000 00:17:10.560 nvme3n1 : 5.08 1587.59 6.20 0.00 0.00 79600.17 4944.99 75306.82 00:17:10.560 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:10.560 Verification LBA range: start 0x20000 length 0x20000 00:17:10.560 nvme3n1 : 5.08 1588.80 6.21 0.00 0.00 79683.15 6136.55 77689.95 00:17:10.560 =================================================================================================================== 00:17:10.560 Total : 21281.05 83.13 0.00 0.00 71619.09 3470.43 110577.11 00:17:11.493 00:17:11.493 real 0m7.558s 00:17:11.493 user 0m11.656s 00:17:11.493 sys 0m1.871s 00:17:11.494 11:42:10 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:11.494 ************************************ 00:17:11.494 11:42:10 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:17:11.494 END TEST bdev_verify 00:17:11.494 ************************************ 00:17:11.752 11:42:10 blockdev_xnvme -- bdev/blockdev.sh@777 
-- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:11.752 11:42:10 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:17:11.752 11:42:10 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:11.752 11:42:10 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:11.752 ************************************ 00:17:11.752 START TEST bdev_verify_big_io 00:17:11.752 ************************************ 00:17:11.752 11:42:10 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:11.752 [2024-07-25 11:42:10.683862] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:17:11.752 [2024-07-25 11:42:10.684063] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76305 ] 00:17:12.009 [2024-07-25 11:42:10.854263] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:12.268 [2024-07-25 11:42:11.163253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:12.268 [2024-07-25 11:42:11.163269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:12.833 Running I/O for 5 seconds... 00:17:19.401 00:17:19.401 Latency(us) 00:17:19.401 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:19.401 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:19.401 Verification LBA range: start 0x0 length 0xa000 00:17:19.401 nvme0n1 : 5.97 117.88 7.37 0.00 0.00 1035822.50 14596.65 972315.93 00:17:19.401 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:19.401 Verification LBA range: start 0xa000 length 0xa000 00:17:19.401 nvme0n1 : 6.01 114.52 7.16 0.00 0.00 1089611.51 116296.61 1616713.54 00:17:19.401 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:19.401 Verification LBA range: start 0x0 length 0xbd0b 00:17:19.401 nvme1n1 : 5.99 130.84 8.18 0.00 0.00 900997.80 71493.82 983754.94 00:17:19.401 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:19.401 Verification LBA range: start 0xbd0b length 0xbd0b 00:17:19.401 nvme1n1 : 5.99 149.52 9.34 0.00 0.00 810322.58 93418.59 827421.79 00:17:19.401 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:19.401 Verification LBA range: start 0x0 length 0x8000 00:17:19.401 nvme2n1 : 5.98 137.89 8.62 0.00 0.00 836528.86 92941.96 957063.91 00:17:19.401 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:19.401 Verification LBA range: start 0x8000 length 0x8000 00:17:19.401 nvme2n1 : 6.01 133.58 8.35 0.00 0.00 877371.60 104380.97 899868.86 00:17:19.401 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:19.401 Verification LBA range: start 0x0 length 0x8000 00:17:19.401 nvme2n2 : 6.01 69.24 4.33 0.00 0.00 1639433.45 200182.69 2821622.69 00:17:19.401 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:19.401 Verification LBA range: start 0x8000 length 0x8000 00:17:19.401 nvme2n2 : 6.00 96.04 
6.00 0.00 0.00 1185315.79 88175.71 1731103.65 00:17:19.401 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:19.401 Verification LBA range: start 0x0 length 0x8000 00:17:19.401 nvme2n3 : 6.00 146.74 9.17 0.00 0.00 762320.10 12034.79 1182031.13 00:17:19.401 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:19.401 Verification LBA range: start 0x8000 length 0x8000 00:17:19.401 nvme2n3 : 6.01 127.69 7.98 0.00 0.00 863286.61 93418.59 1410811.35 00:17:19.401 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:19.401 Verification LBA range: start 0x0 length 0x2000 00:17:19.401 nvme3n1 : 6.01 122.56 7.66 0.00 0.00 885979.05 14239.19 2028517.93 00:17:19.401 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:19.401 Verification LBA range: start 0x2000 length 0x2000 00:17:19.401 nvme3n1 : 6.02 137.38 8.59 0.00 0.00 785857.51 13822.14 1494697.43 00:17:19.401 =================================================================================================================== 00:17:19.401 Total : 1483.89 92.74 0.00 0.00 933016.07 12034.79 2821622.69 00:17:20.388 00:17:20.388 real 0m8.724s 00:17:20.388 user 0m15.434s 00:17:20.388 sys 0m0.660s 00:17:20.388 11:42:19 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:20.388 11:42:19 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:17:20.388 ************************************ 00:17:20.388 END TEST bdev_verify_big_io 00:17:20.388 ************************************ 00:17:20.388 11:42:19 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:20.388 11:42:19 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:17:20.388 11:42:19 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:20.388 11:42:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:20.388 ************************************ 00:17:20.388 START TEST bdev_write_zeroes 00:17:20.388 ************************************ 00:17:20.388 11:42:19 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:20.647 [2024-07-25 11:42:19.465212] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:17:20.647 [2024-07-25 11:42:19.465393] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76420 ] 00:17:20.647 [2024-07-25 11:42:19.632771] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.906 [2024-07-25 11:42:19.861755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:21.474 Running I/O for 1 seconds... 
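The one-second write_zeroes pass reported below is driven by the same bdevperf binary as the verify runs above, switched to 4096-byte I/Os at queue depth 128. A minimal standalone reproduction, assuming the repository layout shown in this log and a valid bdev JSON config, would be:

    # write_zeroes workload: queue depth 128, 4096-byte I/Os, 1-second run
    cd /home/vagrant/spdk_repo/spdk
    ./build/examples/bdevperf --json ./test/bdev/bdev.json \
        -q 128 -o 4096 -w write_zeroes -t 1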
00:17:22.409 00:17:22.409 Latency(us) 00:17:22.409 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:22.409 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:22.409 nvme0n1 : 1.01 10175.30 39.75 0.00 0.00 12564.54 8877.15 22997.18 00:17:22.409 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:22.409 nvme1n1 : 1.02 14606.28 57.06 0.00 0.00 8743.49 4587.52 14954.12 00:17:22.409 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:22.409 nvme2n1 : 1.02 10161.36 39.69 0.00 0.00 12491.64 7923.90 21328.99 00:17:22.409 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:22.409 nvme2n2 : 1.02 10145.81 39.63 0.00 0.00 12501.72 8221.79 21328.99 00:17:22.409 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:22.409 nvme2n3 : 1.02 10130.70 39.57 0.00 0.00 12509.74 8579.26 21328.99 00:17:22.409 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:22.409 nvme3n1 : 1.03 10115.01 39.51 0.00 0.00 12516.48 8579.26 21686.46 00:17:22.409 =================================================================================================================== 00:17:22.409 Total : 65334.45 255.21 0.00 0.00 11674.99 4587.52 22997.18 00:17:23.782 00:17:23.782 real 0m3.284s 00:17:23.782 user 0m2.491s 00:17:23.782 sys 0m0.619s 00:17:23.782 11:42:22 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:23.782 11:42:22 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:17:23.782 ************************************ 00:17:23.782 END TEST bdev_write_zeroes 00:17:23.782 ************************************ 00:17:23.782 11:42:22 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:23.782 11:42:22 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:17:23.782 11:42:22 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:23.782 11:42:22 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:23.782 ************************************ 00:17:23.782 START TEST bdev_json_nonenclosed 00:17:23.782 ************************************ 00:17:23.782 11:42:22 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:23.782 [2024-07-25 11:42:22.806520] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:17:23.782 [2024-07-25 11:42:22.806733] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76487 ] 00:17:24.083 [2024-07-25 11:42:22.972384] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:24.340 [2024-07-25 11:42:23.218779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:24.340 [2024-07-25 11:42:23.218913] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
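The nonenclosed.json fixture itself is not reproduced in this log. A hypothetical config that would trigger the "not enclosed in {}" rejection above is a bare top-level key with no surrounding object, for example:

    "subsystems": [
      { "subsystem": "bdev", "config": [] }
    ]

The valid form wraps the same content in an outer { ... } object, as in the save_config dumps later in this log.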
00:17:24.340 [2024-07-25 11:42:23.218964] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:17:24.340 [2024-07-25 11:42:23.218985] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:24.904 00:17:24.904 real 0m0.955s 00:17:24.904 user 0m0.693s 00:17:24.904 sys 0m0.156s 00:17:24.904 11:42:23 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:24.904 ************************************ 00:17:24.904 END TEST bdev_json_nonenclosed 00:17:24.904 ************************************ 00:17:24.904 11:42:23 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:17:24.904 11:42:23 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:24.904 11:42:23 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:17:24.904 11:42:23 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:24.904 11:42:23 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:24.904 ************************************ 00:17:24.904 START TEST bdev_json_nonarray 00:17:24.904 ************************************ 00:17:24.904 11:42:23 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:24.904 [2024-07-25 11:42:23.826645] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:17:24.904 [2024-07-25 11:42:23.826851] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76519 ] 00:17:25.163 [2024-07-25 11:42:24.010631] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:25.421 [2024-07-25 11:42:24.288712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:25.421 [2024-07-25 11:42:24.288928] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
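Likewise, nonarray.json is not shown here. A hypothetical config that would trip the "'subsystems' should be an array" check above is one that maps the key to a single object instead of an array, for example:

    {
      "subsystems": { "subsystem": "bdev", "config": [] }
    }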
00:17:25.421 [2024-07-25 11:42:24.288976] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:17:25.421 [2024-07-25 11:42:24.288998] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:25.680 00:17:25.680 real 0m1.004s 00:17:25.680 user 0m0.722s 00:17:25.680 sys 0m0.173s 00:17:25.680 11:42:24 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:25.680 11:42:24 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:17:25.680 ************************************ 00:17:25.680 END TEST bdev_json_nonarray 00:17:25.680 ************************************ 00:17:25.938 11:42:24 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:17:25.938 11:42:24 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:17:25.938 11:42:24 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:17:25.938 11:42:24 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:17:25.938 11:42:24 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:17:25.938 11:42:24 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:17:25.938 11:42:24 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:25.938 11:42:24 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:17:25.938 11:42:24 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:17:25.938 11:42:24 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:17:25.938 11:42:24 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:17:25.938 11:42:24 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:26.505 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:38.703 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:38.703 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:43.969 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:17:43.969 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:17:43.969 00:17:43.969 real 1m21.214s 00:17:43.969 user 1m48.332s 00:17:43.969 sys 1m8.061s 00:17:43.969 11:42:42 blockdev_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:43.969 11:42:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:43.969 ************************************ 00:17:43.969 END TEST blockdev_xnvme 00:17:43.969 ************************************ 00:17:43.969 11:42:42 -- spdk/autotest.sh@255 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:17:43.969 11:42:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:43.969 11:42:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:43.969 11:42:42 -- common/autotest_common.sh@10 -- # set +x 00:17:43.969 ************************************ 00:17:43.969 START TEST ublk 00:17:43.969 ************************************ 00:17:43.969 11:42:42 ublk -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:17:44.227 * Looking for test storage... 
00:17:44.227 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:17:44.227 11:42:43 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:17:44.227 11:42:43 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:17:44.227 11:42:43 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:17:44.227 11:42:43 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:17:44.227 11:42:43 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:17:44.227 11:42:43 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:17:44.227 11:42:43 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:17:44.227 11:42:43 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:17:44.227 11:42:43 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:17:44.227 11:42:43 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:17:44.227 11:42:43 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:17:44.227 11:42:43 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:17:44.227 11:42:43 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:17:44.227 11:42:43 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:17:44.227 11:42:43 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:17:44.227 11:42:43 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:17:44.227 11:42:43 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:17:44.227 11:42:43 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:17:44.227 11:42:43 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:17:44.227 11:42:43 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:17:44.227 11:42:43 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:44.227 11:42:43 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:44.227 11:42:43 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:44.227 ************************************ 00:17:44.227 START TEST test_save_ublk_config 00:17:44.228 ************************************ 00:17:44.228 11:42:43 ublk.test_save_ublk_config -- common/autotest_common.sh@1125 -- # test_save_config 00:17:44.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:44.228 11:42:43 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:17:44.228 11:42:43 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=76826 00:17:44.228 11:42:43 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:17:44.228 11:42:43 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:17:44.228 11:42:43 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 76826 00:17:44.228 11:42:43 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 76826 ']' 00:17:44.228 11:42:43 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:44.228 11:42:43 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:44.228 11:42:43 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:44.228 11:42:43 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:44.228 11:42:43 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:44.228 [2024-07-25 11:42:43.219752] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
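For orientation: test_save_ublk_config starts an spdk_tgt, creates a ublk target and disk, captures the running configuration with save_config, and then restarts spdk_tgt from that capture (visible further down as a second target reading -c /dev/fd/63). A hedged sketch of the same flow done by hand, assuming the standard scripts/rpc.py helper and the parameters that appear in the config dump below:

    # expose bdev malloc0 as /dev/ublkb0 (1 queue, depth 128),
    # then save the live config and replay it into a fresh target
    scripts/rpc.py ublk_create_target
    scripts/rpc.py ublk_start_disk malloc0 0 -q 1 -d 128
    scripts/rpc.py save_config > ublk.json
    ./build/bin/spdk_tgt -L ublk -c ublk.json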
00:17:44.228 [2024-07-25 11:42:43.219943] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76826 ] 00:17:44.485 [2024-07-25 11:42:43.393455] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.743 [2024-07-25 11:42:43.751622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:45.676 11:42:44 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:45.676 11:42:44 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:17:45.676 11:42:44 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:17:45.676 11:42:44 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:17:45.676 11:42:44 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.676 11:42:44 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:45.934 [2024-07-25 11:42:44.813987] ublk.c: 538:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:45.934 [2024-07-25 11:42:44.815378] ublk.c: 724:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:45.934 malloc0 00:17:45.934 [2024-07-25 11:42:44.860647] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:17:45.934 [2024-07-25 11:42:44.860788] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:17:45.934 [2024-07-25 11:42:44.860804] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:17:45.934 [2024-07-25 11:42:44.860819] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:17:46.869 [2024-07-25 11:42:45.905981] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:46.869 [2024-07-25 11:42:45.906055] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:46.869 [2024-07-25 11:42:45.913966] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:46.869 [2024-07-25 11:42:45.914148] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:17:47.126 [2024-07-25 11:42:45.930955] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:17:47.126 0 00:17:47.126 11:42:45 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.126 11:42:45 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:17:47.126 11:42:45 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.126 11:42:45 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:47.385 11:42:46 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.385 11:42:46 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:17:47.385 "subsystems": [ 00:17:47.385 { 00:17:47.385 "subsystem": "keyring", 00:17:47.385 "config": [] 00:17:47.385 }, 00:17:47.385 { 00:17:47.385 "subsystem": "iobuf", 00:17:47.385 "config": [ 00:17:47.385 { 00:17:47.386 "method": "iobuf_set_options", 00:17:47.386 "params": { 00:17:47.386 "small_pool_count": 8192, 00:17:47.386 "large_pool_count": 1024, 00:17:47.386 "small_bufsize": 8192, 00:17:47.386 "large_bufsize": 135168 00:17:47.386 } 00:17:47.386 } 00:17:47.386 ] 00:17:47.386 }, 00:17:47.386 { 
00:17:47.386 "subsystem": "sock", 00:17:47.386 "config": [ 00:17:47.386 { 00:17:47.386 "method": "sock_set_default_impl", 00:17:47.386 "params": { 00:17:47.386 "impl_name": "posix" 00:17:47.386 } 00:17:47.386 }, 00:17:47.386 { 00:17:47.386 "method": "sock_impl_set_options", 00:17:47.386 "params": { 00:17:47.386 "impl_name": "ssl", 00:17:47.386 "recv_buf_size": 4096, 00:17:47.386 "send_buf_size": 4096, 00:17:47.386 "enable_recv_pipe": true, 00:17:47.386 "enable_quickack": false, 00:17:47.386 "enable_placement_id": 0, 00:17:47.386 "enable_zerocopy_send_server": true, 00:17:47.386 "enable_zerocopy_send_client": false, 00:17:47.386 "zerocopy_threshold": 0, 00:17:47.386 "tls_version": 0, 00:17:47.386 "enable_ktls": false 00:17:47.386 } 00:17:47.386 }, 00:17:47.386 { 00:17:47.386 "method": "sock_impl_set_options", 00:17:47.386 "params": { 00:17:47.386 "impl_name": "posix", 00:17:47.386 "recv_buf_size": 2097152, 00:17:47.386 "send_buf_size": 2097152, 00:17:47.386 "enable_recv_pipe": true, 00:17:47.386 "enable_quickack": false, 00:17:47.386 "enable_placement_id": 0, 00:17:47.386 "enable_zerocopy_send_server": true, 00:17:47.386 "enable_zerocopy_send_client": false, 00:17:47.386 "zerocopy_threshold": 0, 00:17:47.386 "tls_version": 0, 00:17:47.386 "enable_ktls": false 00:17:47.386 } 00:17:47.386 } 00:17:47.386 ] 00:17:47.386 }, 00:17:47.386 { 00:17:47.386 "subsystem": "vmd", 00:17:47.386 "config": [] 00:17:47.386 }, 00:17:47.386 { 00:17:47.386 "subsystem": "accel", 00:17:47.386 "config": [ 00:17:47.386 { 00:17:47.386 "method": "accel_set_options", 00:17:47.386 "params": { 00:17:47.386 "small_cache_size": 128, 00:17:47.386 "large_cache_size": 16, 00:17:47.386 "task_count": 2048, 00:17:47.386 "sequence_count": 2048, 00:17:47.386 "buf_count": 2048 00:17:47.386 } 00:17:47.386 } 00:17:47.386 ] 00:17:47.386 }, 00:17:47.386 { 00:17:47.386 "subsystem": "bdev", 00:17:47.386 "config": [ 00:17:47.386 { 00:17:47.386 "method": "bdev_set_options", 00:17:47.386 "params": { 00:17:47.386 "bdev_io_pool_size": 65535, 00:17:47.386 "bdev_io_cache_size": 256, 00:17:47.386 "bdev_auto_examine": true, 00:17:47.386 "iobuf_small_cache_size": 128, 00:17:47.386 "iobuf_large_cache_size": 16 00:17:47.386 } 00:17:47.386 }, 00:17:47.386 { 00:17:47.386 "method": "bdev_raid_set_options", 00:17:47.386 "params": { 00:17:47.386 "process_window_size_kb": 1024, 00:17:47.386 "process_max_bandwidth_mb_sec": 0 00:17:47.386 } 00:17:47.386 }, 00:17:47.386 { 00:17:47.386 "method": "bdev_iscsi_set_options", 00:17:47.386 "params": { 00:17:47.386 "timeout_sec": 30 00:17:47.386 } 00:17:47.386 }, 00:17:47.386 { 00:17:47.386 "method": "bdev_nvme_set_options", 00:17:47.386 "params": { 00:17:47.386 "action_on_timeout": "none", 00:17:47.386 "timeout_us": 0, 00:17:47.386 "timeout_admin_us": 0, 00:17:47.386 "keep_alive_timeout_ms": 10000, 00:17:47.386 "arbitration_burst": 0, 00:17:47.386 "low_priority_weight": 0, 00:17:47.386 "medium_priority_weight": 0, 00:17:47.386 "high_priority_weight": 0, 00:17:47.386 "nvme_adminq_poll_period_us": 10000, 00:17:47.386 "nvme_ioq_poll_period_us": 0, 00:17:47.386 "io_queue_requests": 0, 00:17:47.386 "delay_cmd_submit": true, 00:17:47.386 "transport_retry_count": 4, 00:17:47.386 "bdev_retry_count": 3, 00:17:47.386 "transport_ack_timeout": 0, 00:17:47.386 "ctrlr_loss_timeout_sec": 0, 00:17:47.386 "reconnect_delay_sec": 0, 00:17:47.386 "fast_io_fail_timeout_sec": 0, 00:17:47.386 "disable_auto_failback": false, 00:17:47.386 "generate_uuids": false, 00:17:47.386 "transport_tos": 0, 00:17:47.386 "nvme_error_stat": false, 
00:17:47.386 "rdma_srq_size": 0, 00:17:47.386 "io_path_stat": false, 00:17:47.386 "allow_accel_sequence": false, 00:17:47.386 "rdma_max_cq_size": 0, 00:17:47.386 "rdma_cm_event_timeout_ms": 0, 00:17:47.386 "dhchap_digests": [ 00:17:47.386 "sha256", 00:17:47.386 "sha384", 00:17:47.386 "sha512" 00:17:47.386 ], 00:17:47.386 "dhchap_dhgroups": [ 00:17:47.386 "null", 00:17:47.386 "ffdhe2048", 00:17:47.386 "ffdhe3072", 00:17:47.386 "ffdhe4096", 00:17:47.386 "ffdhe6144", 00:17:47.386 "ffdhe8192" 00:17:47.386 ] 00:17:47.386 } 00:17:47.386 }, 00:17:47.386 { 00:17:47.386 "method": "bdev_nvme_set_hotplug", 00:17:47.386 "params": { 00:17:47.386 "period_us": 100000, 00:17:47.386 "enable": false 00:17:47.386 } 00:17:47.386 }, 00:17:47.386 { 00:17:47.386 "method": "bdev_malloc_create", 00:17:47.386 "params": { 00:17:47.386 "name": "malloc0", 00:17:47.386 "num_blocks": 8192, 00:17:47.386 "block_size": 4096, 00:17:47.386 "physical_block_size": 4096, 00:17:47.386 "uuid": "97ac334b-ccd9-4d7b-a642-193adc9de727", 00:17:47.386 "optimal_io_boundary": 0, 00:17:47.386 "md_size": 0, 00:17:47.386 "dif_type": 0, 00:17:47.386 "dif_is_head_of_md": false, 00:17:47.386 "dif_pi_format": 0 00:17:47.386 } 00:17:47.386 }, 00:17:47.386 { 00:17:47.386 "method": "bdev_wait_for_examine" 00:17:47.386 } 00:17:47.386 ] 00:17:47.386 }, 00:17:47.386 { 00:17:47.386 "subsystem": "scsi", 00:17:47.386 "config": null 00:17:47.386 }, 00:17:47.386 { 00:17:47.386 "subsystem": "scheduler", 00:17:47.386 "config": [ 00:17:47.386 { 00:17:47.386 "method": "framework_set_scheduler", 00:17:47.386 "params": { 00:17:47.386 "name": "static" 00:17:47.386 } 00:17:47.386 } 00:17:47.386 ] 00:17:47.386 }, 00:17:47.386 { 00:17:47.386 "subsystem": "vhost_scsi", 00:17:47.386 "config": [] 00:17:47.386 }, 00:17:47.386 { 00:17:47.386 "subsystem": "vhost_blk", 00:17:47.386 "config": [] 00:17:47.386 }, 00:17:47.386 { 00:17:47.386 "subsystem": "ublk", 00:17:47.386 "config": [ 00:17:47.386 { 00:17:47.386 "method": "ublk_create_target", 00:17:47.386 "params": { 00:17:47.386 "cpumask": "1" 00:17:47.386 } 00:17:47.386 }, 00:17:47.386 { 00:17:47.386 "method": "ublk_start_disk", 00:17:47.386 "params": { 00:17:47.386 "bdev_name": "malloc0", 00:17:47.386 "ublk_id": 0, 00:17:47.386 "num_queues": 1, 00:17:47.386 "queue_depth": 128 00:17:47.386 } 00:17:47.386 } 00:17:47.386 ] 00:17:47.386 }, 00:17:47.386 { 00:17:47.386 "subsystem": "nbd", 00:17:47.386 "config": [] 00:17:47.386 }, 00:17:47.386 { 00:17:47.386 "subsystem": "nvmf", 00:17:47.386 "config": [ 00:17:47.386 { 00:17:47.386 "method": "nvmf_set_config", 00:17:47.386 "params": { 00:17:47.386 "discovery_filter": "match_any", 00:17:47.386 "admin_cmd_passthru": { 00:17:47.386 "identify_ctrlr": false 00:17:47.386 } 00:17:47.386 } 00:17:47.386 }, 00:17:47.386 { 00:17:47.386 "method": "nvmf_set_max_subsystems", 00:17:47.386 "params": { 00:17:47.387 "max_subsystems": 1024 00:17:47.387 } 00:17:47.387 }, 00:17:47.387 { 00:17:47.387 "method": "nvmf_set_crdt", 00:17:47.387 "params": { 00:17:47.387 "crdt1": 0, 00:17:47.387 "crdt2": 0, 00:17:47.387 "crdt3": 0 00:17:47.387 } 00:17:47.387 } 00:17:47.387 ] 00:17:47.387 }, 00:17:47.387 { 00:17:47.387 "subsystem": "iscsi", 00:17:47.387 "config": [ 00:17:47.387 { 00:17:47.387 "method": "iscsi_set_options", 00:17:47.387 "params": { 00:17:47.387 "node_base": "iqn.2016-06.io.spdk", 00:17:47.387 "max_sessions": 128, 00:17:47.387 "max_connections_per_session": 2, 00:17:47.387 "max_queue_depth": 64, 00:17:47.387 "default_time2wait": 2, 00:17:47.387 "default_time2retain": 20, 00:17:47.387 
"first_burst_length": 8192, 00:17:47.387 "immediate_data": true, 00:17:47.387 "allow_duplicated_isid": false, 00:17:47.387 "error_recovery_level": 0, 00:17:47.387 "nop_timeout": 60, 00:17:47.387 "nop_in_interval": 30, 00:17:47.387 "disable_chap": false, 00:17:47.387 "require_chap": false, 00:17:47.387 "mutual_chap": false, 00:17:47.387 "chap_group": 0, 00:17:47.387 "max_large_datain_per_connection": 64, 00:17:47.387 "max_r2t_per_connection": 4, 00:17:47.387 "pdu_pool_size": 36864, 00:17:47.387 "immediate_data_pool_size": 16384, 00:17:47.387 "data_out_pool_size": 2048 00:17:47.387 } 00:17:47.387 } 00:17:47.387 ] 00:17:47.387 } 00:17:47.387 ] 00:17:47.387 }' 00:17:47.387 11:42:46 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 76826 00:17:47.387 11:42:46 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 76826 ']' 00:17:47.387 11:42:46 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 76826 00:17:47.387 11:42:46 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:17:47.387 11:42:46 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:47.387 11:42:46 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76826 00:17:47.387 11:42:46 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:47.387 killing process with pid 76826 00:17:47.387 11:42:46 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:47.387 11:42:46 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76826' 00:17:47.387 11:42:46 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 76826 00:17:47.387 11:42:46 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 76826 00:17:48.760 [2024-07-25 11:42:47.734078] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:17:48.760 [2024-07-25 11:42:47.764049] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:48.760 [2024-07-25 11:42:47.764373] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:17:48.760 [2024-07-25 11:42:47.772989] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:48.760 [2024-07-25 11:42:47.773098] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:17:48.760 [2024-07-25 11:42:47.773113] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:17:48.760 [2024-07-25 11:42:47.773161] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:17:48.760 [2024-07-25 11:42:47.777214] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:17:50.136 11:42:49 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=76904 00:17:50.136 11:42:49 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 76904 00:17:50.136 11:42:49 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:17:50.136 11:42:49 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 76904 ']' 00:17:50.136 11:42:49 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:50.136 11:42:49 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:17:50.136 "subsystems": [ 00:17:50.136 { 00:17:50.136 "subsystem": "keyring", 00:17:50.136 "config": [] 00:17:50.136 }, 00:17:50.136 { 00:17:50.136 "subsystem": "iobuf", 
00:17:50.136 "config": [ 00:17:50.136 { 00:17:50.136 "method": "iobuf_set_options", 00:17:50.136 "params": { 00:17:50.136 "small_pool_count": 8192, 00:17:50.136 "large_pool_count": 1024, 00:17:50.136 "small_bufsize": 8192, 00:17:50.136 "large_bufsize": 135168 00:17:50.136 } 00:17:50.136 } 00:17:50.136 ] 00:17:50.136 }, 00:17:50.136 { 00:17:50.136 "subsystem": "sock", 00:17:50.136 "config": [ 00:17:50.136 { 00:17:50.136 "method": "sock_set_default_impl", 00:17:50.136 "params": { 00:17:50.136 "impl_name": "posix" 00:17:50.136 } 00:17:50.136 }, 00:17:50.136 { 00:17:50.136 "method": "sock_impl_set_options", 00:17:50.136 "params": { 00:17:50.136 "impl_name": "ssl", 00:17:50.136 "recv_buf_size": 4096, 00:17:50.136 "send_buf_size": 4096, 00:17:50.136 "enable_recv_pipe": true, 00:17:50.136 "enable_quickack": false, 00:17:50.136 "enable_placement_id": 0, 00:17:50.136 "enable_zerocopy_send_server": true, 00:17:50.136 "enable_zerocopy_send_client": false, 00:17:50.136 "zerocopy_threshold": 0, 00:17:50.136 "tls_version": 0, 00:17:50.136 "enable_ktls": false 00:17:50.136 } 00:17:50.136 }, 00:17:50.136 { 00:17:50.136 "method": "sock_impl_set_options", 00:17:50.136 "params": { 00:17:50.136 "impl_name": "posix", 00:17:50.136 "recv_buf_size": 2097152, 00:17:50.136 "send_buf_size": 2097152, 00:17:50.136 "enable_recv_pipe": true, 00:17:50.136 "enable_quickack": false, 00:17:50.136 "enable_placement_id": 0, 00:17:50.136 "enable_zerocopy_send_server": true, 00:17:50.136 "enable_zerocopy_send_client": false, 00:17:50.136 "zerocopy_threshold": 0, 00:17:50.136 "tls_version": 0, 00:17:50.136 "enable_ktls": false 00:17:50.136 } 00:17:50.136 } 00:17:50.136 ] 00:17:50.136 }, 00:17:50.136 { 00:17:50.136 "subsystem": "vmd", 00:17:50.136 "config": [] 00:17:50.136 }, 00:17:50.136 { 00:17:50.136 "subsystem": "accel", 00:17:50.136 "config": [ 00:17:50.136 { 00:17:50.136 "method": "accel_set_options", 00:17:50.136 "params": { 00:17:50.136 "small_cache_size": 128, 00:17:50.136 "large_cache_size": 16, 00:17:50.136 "task_count": 2048, 00:17:50.136 "sequence_count": 2048, 00:17:50.136 "buf_count": 2048 00:17:50.136 } 00:17:50.136 } 00:17:50.136 ] 00:17:50.136 }, 00:17:50.136 { 00:17:50.136 "subsystem": "bdev", 00:17:50.136 "config": [ 00:17:50.136 { 00:17:50.136 "method": "bdev_set_options", 00:17:50.136 "params": { 00:17:50.136 "bdev_io_pool_size": 65535, 00:17:50.136 "bdev_io_cache_size": 256, 00:17:50.136 "bdev_auto_examine": true, 00:17:50.136 "iobuf_small_cache_size": 128, 00:17:50.136 "iobuf_large_cache_size": 16 00:17:50.136 } 00:17:50.136 }, 00:17:50.136 { 00:17:50.136 "method": "bdev_raid_set_options", 00:17:50.136 "params": { 00:17:50.136 "process_window_size_kb": 1024, 00:17:50.136 "process_max_bandwidth_mb_sec": 0 00:17:50.136 } 00:17:50.136 }, 00:17:50.136 { 00:17:50.136 "method": "bdev_iscsi_set_options", 00:17:50.136 "params": { 00:17:50.136 "timeout_sec": 30 00:17:50.136 } 00:17:50.136 }, 00:17:50.136 { 00:17:50.136 "method": "bdev_nvme_set_options", 00:17:50.136 "params": { 00:17:50.136 "action_on_timeout": "none", 00:17:50.136 "timeout_us": 0, 00:17:50.136 "timeout_admin_us": 0, 00:17:50.136 "keep_alive_timeout_ms": 10000, 00:17:50.136 "arbitration_burst": 0, 00:17:50.136 "low_priority_weight": 0, 00:17:50.136 "medium_priority_weight": 0, 00:17:50.136 "high_priority_weight": 0, 00:17:50.136 "nvme_adminq_poll_period_us": 10000, 00:17:50.136 "nvme_ioq_poll_period_us": 0, 00:17:50.136 "io_queue_requests": 0, 00:17:50.136 "delay_cmd_submit": true, 00:17:50.136 "transport_retry_count": 4, 00:17:50.136 
"bdev_retry_count": 3, 00:17:50.136 "transport_ack_timeout": 0, 00:17:50.136 "ctrlr_loss_timeout_sec": 0, 00:17:50.136 "reconnect_delay_sec": 0, 00:17:50.136 "fast_io_fail_timeout_sec": 0, 00:17:50.136 "disable_auto_failback": false, 00:17:50.136 "generate_uuids": false, 00:17:50.136 "transport_tos": 0, 00:17:50.136 "nvme_error_stat": false, 00:17:50.136 "rdma_srq_size": 0, 00:17:50.136 "io_path_stat": false, 00:17:50.136 "allow_accel_sequence": false, 00:17:50.136 "rdma_max_cq_size": 0, 00:17:50.136 "rdma_cm_event_timeout_ms": 0, 00:17:50.136 "dhchap_digests": [ 00:17:50.136 "sha256", 00:17:50.136 "sha384", 00:17:50.136 "sha512" 00:17:50.136 ], 00:17:50.136 "dhchap_dhgroups": [ 00:17:50.136 "null", 00:17:50.136 "ffdhe2048", 00:17:50.136 "ffdhe3072", 00:17:50.136 "ffdhe4096", 00:17:50.137 "ffdhe6144", 00:17:50.137 "ffdhe8192" 00:17:50.137 ] 00:17:50.137 } 00:17:50.137 }, 00:17:50.137 { 00:17:50.137 "method": "bdev_nvme_set_hotplug", 00:17:50.137 "params": { 00:17:50.137 "period_us": 100000, 00:17:50.137 "enable": false 00:17:50.137 } 00:17:50.137 }, 00:17:50.137 { 00:17:50.137 "method": "bdev_malloc_create", 00:17:50.137 "params": { 00:17:50.137 "name": "malloc0", 00:17:50.137 "num_blocks": 8192, 00:17:50.137 "block_size": 4096, 00:17:50.137 "physical_block_size": 4096, 00:17:50.137 "uuid": "97ac334b-ccd9-4d7b-a642-193adc9de727", 00:17:50.137 "optimal_io_boundary": 0, 00:17:50.137 "md_size": 0, 00:17:50.137 "dif_type": 0, 00:17:50.137 "dif_is_head_of_md": false, 00:17:50.137 "dif_pi_format": 0 00:17:50.137 } 00:17:50.137 }, 00:17:50.137 { 00:17:50.137 "method": "bdev_wait_for_examine" 00:17:50.137 } 00:17:50.137 ] 00:17:50.137 }, 00:17:50.137 { 00:17:50.137 "subsystem": "scsi", 00:17:50.137 "config": null 00:17:50.137 }, 00:17:50.137 { 00:17:50.137 "subsystem": "scheduler", 00:17:50.137 "config": [ 00:17:50.137 { 00:17:50.137 "method": "framework_set_scheduler", 00:17:50.137 "params": { 00:17:50.137 "name": "static" 00:17:50.137 } 00:17:50.137 } 00:17:50.137 ] 00:17:50.137 }, 00:17:50.137 { 00:17:50.137 "subsystem": "vhost_scsi", 00:17:50.137 "config": [] 00:17:50.137 }, 00:17:50.137 { 00:17:50.137 "subsystem": "vhost_blk", 00:17:50.137 "config": [] 00:17:50.137 }, 00:17:50.137 { 00:17:50.137 "subsystem": "ublk", 00:17:50.137 "config": [ 00:17:50.137 { 00:17:50.137 "method": "ublk_create_target", 00:17:50.137 "params": { 00:17:50.137 "cpumask": "1" 00:17:50.137 } 00:17:50.137 }, 00:17:50.137 { 00:17:50.137 "method": "ublk_start_disk", 00:17:50.137 "params": { 00:17:50.137 "bdev_name": "malloc0", 00:17:50.137 "ublk_id": 0, 00:17:50.137 "num_queues": 1, 00:17:50.137 "queue_depth": 128 00:17:50.137 } 00:17:50.137 } 00:17:50.137 ] 00:17:50.137 }, 00:17:50.137 { 00:17:50.137 "subsystem": "nbd", 00:17:50.137 "config": [] 00:17:50.137 }, 00:17:50.137 { 00:17:50.137 "subsystem": "nvmf", 00:17:50.137 "config": [ 00:17:50.137 { 00:17:50.137 "method": "nvmf_set_config", 00:17:50.137 "params": { 00:17:50.137 "discovery_filter": "match_any", 00:17:50.137 "admin_cmd_passthru": { 00:17:50.137 "identify_ctrlr": false 00:17:50.137 } 00:17:50.137 } 00:17:50.137 }, 00:17:50.137 { 00:17:50.137 "method": "nvmf_set_max_subsystems", 00:17:50.137 "params": { 00:17:50.137 "max_subsystems": 1024 00:17:50.137 } 00:17:50.137 }, 00:17:50.137 { 00:17:50.137 "method": "nvmf_set_crdt", 00:17:50.137 "params": { 00:17:50.137 "crdt1": 0, 00:17:50.137 "crdt2": 0, 00:17:50.137 "crdt3": 0 00:17:50.137 } 00:17:50.137 } 00:17:50.137 ] 00:17:50.137 }, 00:17:50.137 { 00:17:50.137 "subsystem": "iscsi", 00:17:50.137 "config": [ 
00:17:50.137 { 00:17:50.137 "method": "iscsi_set_options", 00:17:50.137 "params": { 00:17:50.137 "node_base": "iqn.2016-06.io.spdk", 00:17:50.137 "max_sessions": 128, 00:17:50.137 "max_connections_per_session": 2, 00:17:50.137 "max_queue_depth": 64, 00:17:50.137 "default_time2wait": 2, 00:17:50.137 "default_time2retain": 20, 00:17:50.137 "first_burst_length": 8192, 00:17:50.137 "immediate_data": true, 00:17:50.137 "allow_duplicated_isid": false, 00:17:50.137 "error_recovery_level": 0, 00:17:50.137 "nop_timeout": 60, 00:17:50.137 "nop_in_interval": 30, 00:17:50.137 "disable_chap": false, 00:17:50.137 "require_chap": false, 00:17:50.137 "mutual_chap": false, 00:17:50.137 "chap_group": 0, 00:17:50.137 "max_large_datain_per_connection": 64, 00:17:50.137 "max_r2t_per_connection": 4, 00:17:50.137 "pdu_pool_size": 36864, 00:17:50.137 "immediate_data_pool_size": 16384, 00:17:50.137 "data_out_pool_size": 2048 00:17:50.137 } 00:17:50.137 } 00:17:50.137 ] 00:17:50.137 } 00:17:50.137 ] 00:17:50.137 }' 00:17:50.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:50.137 11:42:49 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:50.137 11:42:49 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:50.137 11:42:49 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:50.137 11:42:49 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:50.396 [2024-07-25 11:42:49.302963] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:17:50.396 [2024-07-25 11:42:49.303161] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76904 ] 00:17:50.653 [2024-07-25 11:42:49.475537] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.912 [2024-07-25 11:42:49.727255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:51.855 [2024-07-25 11:42:50.700945] ublk.c: 538:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:51.855 [2024-07-25 11:42:50.702188] ublk.c: 724:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:51.855 [2024-07-25 11:42:50.709121] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:17:51.855 [2024-07-25 11:42:50.709240] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:17:51.855 [2024-07-25 11:42:50.709255] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:17:51.855 [2024-07-25 11:42:50.709266] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:17:51.855 [2024-07-25 11:42:50.718052] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:51.855 [2024-07-25 11:42:50.718091] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:51.855 [2024-07-25 11:42:50.724966] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:51.855 [2024-07-25 11:42:50.725147] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:17:51.855 [2024-07-25 11:42:50.741961] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd 
UBLK_CMD_START_DEV completed 00:17:51.855 11:42:50 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:51.855 11:42:50 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:17:51.855 11:42:50 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:17:51.855 11:42:50 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:17:51.855 11:42:50 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.855 11:42:50 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:51.855 11:42:50 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.855 11:42:50 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:17:51.855 11:42:50 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:17:51.855 11:42:50 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 76904 00:17:51.855 11:42:50 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 76904 ']' 00:17:51.855 11:42:50 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 76904 00:17:51.855 11:42:50 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:17:51.855 11:42:50 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:51.855 11:42:50 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76904 00:17:51.856 11:42:50 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:51.856 11:42:50 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:51.856 killing process with pid 76904 00:17:51.856 11:42:50 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76904' 00:17:51.856 11:42:50 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 76904 00:17:51.856 11:42:50 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 76904 00:17:53.756 [2024-07-25 11:42:52.338824] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:17:53.756 [2024-07-25 11:42:52.377078] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:53.756 [2024-07-25 11:42:52.377323] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:17:53.756 [2024-07-25 11:42:52.384963] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:53.756 [2024-07-25 11:42:52.385045] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:17:53.756 [2024-07-25 11:42:52.385060] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:17:53.756 [2024-07-25 11:42:52.385097] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:17:53.756 [2024-07-25 11:42:52.385318] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:17:54.693 11:42:53 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:17:54.693 00:17:54.693 real 0m10.615s 00:17:54.693 user 0m7.069s 00:17:54.693 sys 0m2.016s 00:17:54.693 ************************************ 00:17:54.693 END TEST test_save_ublk_config 00:17:54.693 ************************************ 00:17:54.693 11:42:53 ublk.test_save_ublk_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:54.693 11:42:53 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:54.952 11:42:53 
ublk -- ublk/ublk.sh@139 -- # spdk_pid=76981 00:17:54.952 11:42:53 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:17:54.952 11:42:53 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:54.952 11:42:53 ublk -- ublk/ublk.sh@141 -- # waitforlisten 76981 00:17:54.952 11:42:53 ublk -- common/autotest_common.sh@831 -- # '[' -z 76981 ']' 00:17:54.952 11:42:53 ublk -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:54.952 11:42:53 ublk -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:54.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:54.952 11:42:53 ublk -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:54.952 11:42:53 ublk -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:54.952 11:42:53 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:54.952 [2024-07-25 11:42:53.889008] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:17:54.952 [2024-07-25 11:42:53.889197] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76981 ] 00:17:55.210 [2024-07-25 11:42:54.066079] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:55.557 [2024-07-25 11:42:54.310001] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:55.557 [2024-07-25 11:42:54.310017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:56.122 11:42:55 ublk -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:56.122 11:42:55 ublk -- common/autotest_common.sh@864 -- # return 0 00:17:56.122 11:42:55 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:17:56.122 11:42:55 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:56.122 11:42:55 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:56.122 11:42:55 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:56.122 ************************************ 00:17:56.122 START TEST test_create_ublk 00:17:56.122 ************************************ 00:17:56.122 11:42:55 ublk.test_create_ublk -- common/autotest_common.sh@1125 -- # test_create_ublk 00:17:56.122 11:42:55 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:17:56.123 11:42:55 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.123 11:42:55 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:56.123 [2024-07-25 11:42:55.144948] ublk.c: 538:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:56.123 [2024-07-25 11:42:55.147899] ublk.c: 724:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:56.123 11:42:55 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.123 11:42:55 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:17:56.123 11:42:55 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:17:56.123 11:42:55 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.123 11:42:55 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:56.380 11:42:55 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.380 
11:42:55 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:17:56.380 11:42:55 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:17:56.380 11:42:55 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.380 11:42:55 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:56.638 [2024-07-25 11:42:55.441241] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:17:56.638 [2024-07-25 11:42:55.441839] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:17:56.638 [2024-07-25 11:42:55.441866] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:17:56.638 [2024-07-25 11:42:55.441882] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:17:56.638 [2024-07-25 11:42:55.450483] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:56.638 [2024-07-25 11:42:55.450527] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:56.638 [2024-07-25 11:42:55.456993] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:56.638 [2024-07-25 11:42:55.468296] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:17:56.638 [2024-07-25 11:42:55.483095] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:17:56.638 11:42:55 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.638 11:42:55 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:17:56.638 11:42:55 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:17:56.638 11:42:55 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:17:56.638 11:42:55 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.638 11:42:55 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:56.638 11:42:55 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.638 11:42:55 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:17:56.639 { 00:17:56.639 "ublk_device": "/dev/ublkb0", 00:17:56.639 "id": 0, 00:17:56.639 "queue_depth": 512, 00:17:56.639 "num_queues": 4, 00:17:56.639 "bdev_name": "Malloc0" 00:17:56.639 } 00:17:56.639 ]' 00:17:56.639 11:42:55 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:17:56.639 11:42:55 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:17:56.639 11:42:55 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:17:56.639 11:42:55 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:17:56.639 11:42:55 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:17:56.639 11:42:55 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:17:56.639 11:42:55 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:17:56.897 11:42:55 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:17:56.897 11:42:55 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:17:56.897 11:42:55 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:17:56.897 11:42:55 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:17:56.897 11:42:55 ublk.test_create_ublk -- lvol/common.sh@40 -- # local 
file=/dev/ublkb0 00:17:56.897 11:42:55 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:17:56.897 11:42:55 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:17:56.897 11:42:55 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:17:56.897 11:42:55 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:17:56.897 11:42:55 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:17:56.897 11:42:55 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:17:56.897 11:42:55 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:17:56.897 11:42:55 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:17:56.897 11:42:55 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:17:56.897 11:42:55 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:17:56.897 fio: verification read phase will never start because write phase uses all of runtime 00:17:56.897 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:17:56.897 fio-3.35 00:17:56.897 Starting 1 process 00:18:09.094 00:18:09.094 fio_test: (groupid=0, jobs=1): err= 0: pid=77032: Thu Jul 25 11:43:05 2024 00:18:09.094 write: IOPS=9727, BW=38.0MiB/s (39.8MB/s)(380MiB/10001msec); 0 zone resets 00:18:09.094 clat (usec): min=54, max=11494, avg=101.10, stdev=167.47 00:18:09.094 lat (usec): min=54, max=11517, avg=102.04, stdev=167.51 00:18:09.094 clat percentiles (usec): 00:18:09.094 | 1.00th=[ 77], 5.00th=[ 80], 10.00th=[ 82], 20.00th=[ 84], 00:18:09.094 | 30.00th=[ 86], 40.00th=[ 88], 50.00th=[ 89], 60.00th=[ 91], 00:18:09.094 | 70.00th=[ 94], 80.00th=[ 98], 90.00th=[ 105], 95.00th=[ 114], 00:18:09.094 | 99.00th=[ 147], 99.50th=[ 176], 99.90th=[ 3359], 99.95th=[ 3785], 00:18:09.094 | 99.99th=[ 4113] 00:18:09.094 bw ( KiB/s): min=17072, max=43336, per=99.43%, avg=38689.68, stdev=5707.13, samples=19 00:18:09.094 iops : min= 4268, max=10834, avg=9672.42, stdev=1426.78, samples=19 00:18:09.094 lat (usec) : 100=83.91%, 250=15.66%, 500=0.03%, 750=0.01%, 1000=0.03% 00:18:09.094 lat (msec) : 2=0.11%, 4=0.22%, 10=0.02%, 20=0.01% 00:18:09.094 cpu : usr=2.56%, sys=6.77%, ctx=97290, majf=0, minf=795 00:18:09.094 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:09.094 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:09.094 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:09.094 issued rwts: total=0,97287,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:09.094 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:09.094 00:18:09.094 Run status group 0 (all jobs): 00:18:09.094 WRITE: bw=38.0MiB/s (39.8MB/s), 38.0MiB/s-38.0MiB/s (39.8MB/s-39.8MB/s), io=380MiB (398MB), run=10001-10001msec 00:18:09.094 00:18:09.094 Disk stats (read/write): 00:18:09.094 ublkb0: ios=0/96153, merge=0/0, ticks=0/9002, in_queue=9003, util=99.11% 00:18:09.094 11:43:05 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd 
ublk_stop_disk 0 00:18:09.094 11:43:05 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.094 11:43:06 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:09.094 [2024-07-25 11:43:06.005595] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:18:09.094 [2024-07-25 11:43:06.047992] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:09.094 [2024-07-25 11:43:06.053354] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:18:09.094 [2024-07-25 11:43:06.062383] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:09.094 [2024-07-25 11:43:06.062785] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:18:09.094 [2024-07-25 11:43:06.062810] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:18:09.094 11:43:06 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.094 11:43:06 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:18:09.094 11:43:06 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # local es=0 00:18:09.094 11:43:06 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:18:09.094 11:43:06 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:09.094 11:43:06 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:09.094 11:43:06 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:09.094 11:43:06 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:09.094 11:43:06 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:18:09.094 11:43:06 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.094 11:43:06 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:09.094 [2024-07-25 11:43:06.075120] ublk.c:1053:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:18:09.094 request: 00:18:09.094 { 00:18:09.094 "ublk_id": 0, 00:18:09.094 "method": "ublk_stop_disk", 00:18:09.094 "req_id": 1 00:18:09.094 } 00:18:09.094 Got JSON-RPC error response 00:18:09.094 response: 00:18:09.094 { 00:18:09.094 "code": -19, 00:18:09.094 "message": "No such device" 00:18:09.094 } 00:18:09.094 11:43:06 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:09.094 11:43:06 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # es=1 00:18:09.094 11:43:06 ublk.test_create_ublk -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:09.094 11:43:06 ublk.test_create_ublk -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:09.094 11:43:06 ublk.test_create_ublk -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:09.094 11:43:06 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:18:09.094 11:43:06 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.094 11:43:06 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:09.094 [2024-07-25 11:43:06.095052] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:18:09.094 [2024-07-25 11:43:06.105182] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:18:09.094 [2024-07-25 11:43:06.105233] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:18:09.094 11:43:06 ublk.test_create_ublk -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.094 11:43:06 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:18:09.094 11:43:06 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.094 11:43:06 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:09.094 11:43:06 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.094 11:43:06 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:18:09.094 11:43:06 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:18:09.094 11:43:06 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.094 11:43:06 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:09.094 11:43:06 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.094 11:43:06 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:18:09.094 11:43:06 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:18:09.094 11:43:06 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:18:09.094 11:43:06 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:18:09.094 11:43:06 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.094 11:43:06 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:09.094 11:43:06 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.094 11:43:06 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:18:09.094 11:43:06 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:18:09.094 ************************************ 00:18:09.094 END TEST test_create_ublk 00:18:09.094 ************************************ 00:18:09.094 11:43:06 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:18:09.094 00:18:09.094 real 0m11.432s 00:18:09.094 user 0m0.688s 00:18:09.094 sys 0m0.779s 00:18:09.094 11:43:06 ublk.test_create_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:09.094 11:43:06 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:09.094 11:43:06 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:18:09.094 11:43:06 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:18:09.094 11:43:06 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:09.094 11:43:06 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:09.094 ************************************ 00:18:09.094 START TEST test_create_multi_ublk 00:18:09.094 ************************************ 00:18:09.094 11:43:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@1125 -- # test_create_multi_ublk 00:18:09.094 11:43:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:18:09.094 11:43:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.094 11:43:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:09.094 [2024-07-25 11:43:06.623019] ublk.c: 538:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:09.094 [2024-07-25 11:43:06.625889] ublk.c: 724:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:09.094 11:43:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.094 11:43:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:18:09.094 11:43:06 
ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:18:09.094 11:43:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:09.094 11:43:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:18:09.094 11:43:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.094 11:43:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:09.094 11:43:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.094 11:43:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:18:09.094 11:43:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:18:09.094 11:43:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.094 11:43:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:09.094 [2024-07-25 11:43:06.902164] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:18:09.095 [2024-07-25 11:43:06.902750] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:18:09.095 [2024-07-25 11:43:06.902779] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:18:09.095 [2024-07-25 11:43:06.902791] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:18:09.095 [2024-07-25 11:43:06.911413] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:09.095 [2024-07-25 11:43:06.911439] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:09.095 [2024-07-25 11:43:06.917965] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:09.095 [2024-07-25 11:43:06.918808] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:18:09.095 [2024-07-25 11:43:06.933011] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:18:09.095 11:43:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.095 11:43:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:18:09.095 11:43:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:09.095 11:43:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:18:09.095 11:43:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.095 11:43:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:09.095 11:43:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.095 11:43:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:18:09.095 11:43:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:18:09.095 11:43:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.095 11:43:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:09.095 [2024-07-25 11:43:07.221122] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:18:09.095 [2024-07-25 11:43:07.221678] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:18:09.095 [2024-07-25 11:43:07.221694] ublk.c: 
937:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:18:09.095 [2024-07-25 11:43:07.221708] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:18:09.095 [2024-07-25 11:43:07.228981] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:09.095 [2024-07-25 11:43:07.229014] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:09.095 [2024-07-25 11:43:07.236966] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:09.095 [2024-07-25 11:43:07.237883] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:18:09.095 [2024-07-25 11:43:07.253965] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:18:09.095 11:43:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.095 11:43:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:18:09.095 11:43:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:09.095 11:43:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:18:09.095 11:43:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.095 11:43:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:09.095 11:43:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.095 11:43:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:18:09.095 11:43:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:18:09.095 11:43:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.095 11:43:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:09.095 [2024-07-25 11:43:07.547138] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:18:09.095 [2024-07-25 11:43:07.547697] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:18:09.095 [2024-07-25 11:43:07.547728] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:18:09.095 [2024-07-25 11:43:07.547739] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:18:09.095 [2024-07-25 11:43:07.556385] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:09.095 [2024-07-25 11:43:07.556410] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:09.095 [2024-07-25 11:43:07.562998] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:09.095 [2024-07-25 11:43:07.563851] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:18:09.095 [2024-07-25 11:43:07.572996] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:18:09.095 11:43:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.095 11:43:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:18:09.095 11:43:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:09.095 11:43:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:18:09.095 11:43:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:09.095 11:43:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:09.095 11:43:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.095 11:43:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:18:09.095 11:43:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:18:09.095 11:43:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.095 11:43:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:09.095 [2024-07-25 11:43:07.890130] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:18:09.095 [2024-07-25 11:43:07.890684] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:18:09.095 [2024-07-25 11:43:07.890709] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:18:09.095 [2024-07-25 11:43:07.890724] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:18:09.095 [2024-07-25 11:43:07.899377] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:09.095 [2024-07-25 11:43:07.899411] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:09.095 [2024-07-25 11:43:07.905965] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:09.095 [2024-07-25 11:43:07.906869] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:18:09.095 [2024-07-25 11:43:07.915022] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:18:09.095 11:43:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.095 11:43:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:18:09.095 11:43:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:18:09.095 11:43:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.095 11:43:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:09.095 11:43:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.095 11:43:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:18:09.095 { 00:18:09.095 "ublk_device": "/dev/ublkb0", 00:18:09.095 "id": 0, 00:18:09.095 "queue_depth": 512, 00:18:09.095 "num_queues": 4, 00:18:09.095 "bdev_name": "Malloc0" 00:18:09.095 }, 00:18:09.095 { 00:18:09.095 "ublk_device": "/dev/ublkb1", 00:18:09.095 "id": 1, 00:18:09.095 "queue_depth": 512, 00:18:09.095 "num_queues": 4, 00:18:09.095 "bdev_name": "Malloc1" 00:18:09.095 }, 00:18:09.095 { 00:18:09.095 "ublk_device": "/dev/ublkb2", 00:18:09.095 "id": 2, 00:18:09.095 "queue_depth": 512, 00:18:09.095 "num_queues": 4, 00:18:09.095 "bdev_name": "Malloc2" 00:18:09.095 }, 00:18:09.095 { 00:18:09.095 "ublk_device": "/dev/ublkb3", 00:18:09.095 "id": 3, 00:18:09.095 "queue_depth": 512, 00:18:09.095 "num_queues": 4, 00:18:09.095 "bdev_name": "Malloc3" 00:18:09.095 } 00:18:09.095 ]' 00:18:09.095 11:43:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:18:09.095 11:43:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:09.095 11:43:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:18:09.095 11:43:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- 
# [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:18:09.095 11:43:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:18:09.095 11:43:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:18:09.095 11:43:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:18:09.095 11:43:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:18:09.095 11:43:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:18:09.353 11:43:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:18:09.353 11:43:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:18:09.353 11:43:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:18:09.353 11:43:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:09.353 11:43:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:18:09.353 11:43:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:18:09.353 11:43:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:18:09.353 11:43:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:18:09.353 11:43:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:18:09.353 11:43:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:18:09.353 11:43:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:18:09.610 11:43:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:18:09.610 11:43:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:18:09.610 11:43:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:18:09.610 11:43:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:09.610 11:43:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:18:09.610 11:43:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:18:09.610 11:43:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:18:09.610 11:43:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:18:09.610 11:43:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:18:09.610 11:43:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:18:09.610 11:43:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:18:09.867 11:43:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:18:09.867 11:43:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:18:09.867 11:43:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:18:09.867 11:43:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:09.867 11:43:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:18:09.867 11:43:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:18:09.867 11:43:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:18:09.867 11:43:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:18:09.867 11:43:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 
00:18:10.124 11:43:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:18:10.124 11:43:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:18:10.124 11:43:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:18:10.124 11:43:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:18:10.124 11:43:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:18:10.124 11:43:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:18:10.124 11:43:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:18:10.124 11:43:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:10.124 11:43:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:18:10.124 11:43:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.124 11:43:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:10.124 [2024-07-25 11:43:09.047417] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:18:10.124 [2024-07-25 11:43:09.078667] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:10.124 [2024-07-25 11:43:09.080199] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:18:10.124 [2024-07-25 11:43:09.085959] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:10.124 [2024-07-25 11:43:09.086325] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:18:10.124 [2024-07-25 11:43:09.086341] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:18:10.124 11:43:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.124 11:43:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:10.124 11:43:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:18:10.124 11:43:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.124 11:43:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:10.124 [2024-07-25 11:43:09.101077] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:18:10.124 [2024-07-25 11:43:09.133020] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:10.124 [2024-07-25 11:43:09.134492] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:18:10.124 [2024-07-25 11:43:09.140969] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:10.124 [2024-07-25 11:43:09.141335] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:18:10.124 [2024-07-25 11:43:09.141351] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:18:10.124 11:43:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.124 11:43:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:10.124 11:43:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:18:10.124 11:43:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.124 11:43:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:10.124 [2024-07-25 11:43:09.157118] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: 
ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:18:10.381 [2024-07-25 11:43:09.197026] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:10.381 [2024-07-25 11:43:09.198424] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:18:10.381 [2024-07-25 11:43:09.205108] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:10.381 [2024-07-25 11:43:09.205509] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:18:10.381 [2024-07-25 11:43:09.205526] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:18:10.381 11:43:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.381 11:43:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:10.381 11:43:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:18:10.381 11:43:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.381 11:43:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:10.381 [2024-07-25 11:43:09.221145] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:18:10.382 [2024-07-25 11:43:09.253017] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:10.382 [2024-07-25 11:43:09.254336] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:18:10.382 [2024-07-25 11:43:09.262067] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:10.382 [2024-07-25 11:43:09.262443] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:18:10.382 [2024-07-25 11:43:09.262459] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:18:10.382 11:43:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.382 11:43:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:18:10.639 [2024-07-25 11:43:09.541103] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:18:10.639 [2024-07-25 11:43:09.547191] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:18:10.639 [2024-07-25 11:43:09.547261] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:18:10.639 11:43:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:18:10.639 11:43:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:10.639 11:43:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:18:10.639 11:43:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.639 11:43:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:10.896 11:43:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.896 11:43:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:10.896 11:43:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:18:10.896 11:43:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.896 11:43:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:11.483 11:43:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.483 11:43:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for 
i in $(seq 0 $MAX_DEV_ID) 00:18:11.483 11:43:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:18:11.483 11:43:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.483 11:43:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:11.754 11:43:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.754 11:43:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:11.754 11:43:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:18:11.754 11:43:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.754 11:43:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:12.011 11:43:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.011 11:43:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:18:12.011 11:43:10 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:18:12.011 11:43:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.011 11:43:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:12.011 11:43:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.011 11:43:10 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:18:12.011 11:43:10 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:18:12.011 11:43:11 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:18:12.011 11:43:11 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:18:12.011 11:43:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.011 11:43:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:12.011 11:43:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.011 11:43:11 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:18:12.011 11:43:11 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:18:12.269 ************************************ 00:18:12.269 END TEST test_create_multi_ublk 00:18:12.269 ************************************ 00:18:12.269 11:43:11 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:18:12.269 00:18:12.269 real 0m4.474s 00:18:12.269 user 0m1.372s 00:18:12.269 sys 0m0.182s 00:18:12.269 11:43:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:12.269 11:43:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:12.269 11:43:11 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:18:12.269 11:43:11 ublk -- ublk/ublk.sh@147 -- # cleanup 00:18:12.269 11:43:11 ublk -- ublk/ublk.sh@130 -- # killprocess 76981 00:18:12.269 11:43:11 ublk -- common/autotest_common.sh@950 -- # '[' -z 76981 ']' 00:18:12.269 11:43:11 ublk -- common/autotest_common.sh@954 -- # kill -0 76981 00:18:12.269 11:43:11 ublk -- common/autotest_common.sh@955 -- # uname 00:18:12.269 11:43:11 ublk -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:12.269 11:43:11 ublk -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76981 00:18:12.269 killing process with pid 76981 00:18:12.269 11:43:11 ublk -- common/autotest_common.sh@956 -- 
# process_name=reactor_0 00:18:12.269 11:43:11 ublk -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:12.269 11:43:11 ublk -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76981' 00:18:12.269 11:43:11 ublk -- common/autotest_common.sh@969 -- # kill 76981 00:18:12.269 11:43:11 ublk -- common/autotest_common.sh@974 -- # wait 76981 00:18:13.201 [2024-07-25 11:43:12.237611] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:18:13.201 [2024-07-25 11:43:12.237706] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:18:14.572 00:18:14.572 real 0m30.461s 00:18:14.572 user 0m43.112s 00:18:14.572 sys 0m7.816s 00:18:14.572 11:43:13 ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:14.572 ************************************ 00:18:14.572 11:43:13 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:14.572 END TEST ublk 00:18:14.572 ************************************ 00:18:14.572 11:43:13 -- spdk/autotest.sh@256 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:18:14.572 11:43:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:18:14.572 11:43:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:14.572 11:43:13 -- common/autotest_common.sh@10 -- # set +x 00:18:14.572 ************************************ 00:18:14.572 START TEST ublk_recovery 00:18:14.572 ************************************ 00:18:14.572 11:43:13 ublk_recovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:18:14.572 * Looking for test storage... 00:18:14.572 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:18:14.572 11:43:13 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:18:14.572 11:43:13 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:18:14.572 11:43:13 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:18:14.572 11:43:13 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:18:14.572 11:43:13 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:18:14.572 11:43:13 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:18:14.572 11:43:13 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:18:14.572 11:43:13 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:18:14.572 11:43:13 ublk_recovery -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:18:14.572 11:43:13 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:18:14.572 11:43:13 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=77376 00:18:14.572 11:43:13 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:14.572 11:43:13 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:18:14.572 11:43:13 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 77376 00:18:14.572 11:43:13 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 77376 ']' 00:18:14.572 11:43:13 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:14.572 11:43:13 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:14.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:14.572 11:43:13 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
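Condensed from the RPCs and commands that appear in the trace below, the scenario the ublk_recovery suite exercises is, in outline, the following. This is a simplified sketch, not the literal ublk_recovery.sh source: the real script adds sleeps, waits and error handling, and the $spdk_pid/$fio_pid bookkeeping here is illustrative.

    scripts/rpc.py ublk_create_target
    scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
    scripts/rpc.py ublk_start_disk malloc0 1 -q 2 -d 128   # exposes /dev/ublkb1
    taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 \
        --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 \
        --time_based --runtime=60 & fio_pid=$!
    kill -9 "$spdk_pid"                                # hard-kill the target while fio is mid-run
    build/bin/spdk_tgt -m 0x3 -L ublk & spdk_pid=$!    # bring up a fresh target
    scripts/rpc.py ublk_recover_disk malloc0 1         # re-adopt the still-open /dev/ublkb1
    wait "$fio_pid"                                    # the 60 s fio job must still finish with err=0
    scripts/rpc.py ublk_stop_disk 1
    scripts/rpc.py ublk_destroy_target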
00:18:14.572 11:43:13 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:14.572 11:43:13 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:14.828 [2024-07-25 11:43:13.750149] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:18:14.828 [2024-07-25 11:43:13.750349] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77376 ] 00:18:15.085 [2024-07-25 11:43:13.930138] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:15.343 [2024-07-25 11:43:14.174917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:15.343 [2024-07-25 11:43:14.174957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:16.281 11:43:14 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:16.281 11:43:14 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:18:16.281 11:43:14 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:18:16.281 11:43:14 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.281 11:43:14 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:16.281 [2024-07-25 11:43:14.997945] ublk.c: 538:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:16.281 [2024-07-25 11:43:15.001089] ublk.c: 724:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:16.281 11:43:15 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.281 11:43:15 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:18:16.281 11:43:15 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.281 11:43:15 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:16.281 malloc0 00:18:16.281 11:43:15 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.281 11:43:15 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:18:16.281 11:43:15 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.281 11:43:15 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:16.281 [2024-07-25 11:43:15.168133] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:18:16.281 [2024-07-25 11:43:15.168293] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:18:16.281 [2024-07-25 11:43:15.168310] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:18:16.281 [2024-07-25 11:43:15.168325] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:18:16.281 [2024-07-25 11:43:15.177082] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:16.281 [2024-07-25 11:43:15.177114] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:16.281 [2024-07-25 11:43:15.183959] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:16.281 [2024-07-25 11:43:15.184171] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:18:16.281 [2024-07-25 11:43:15.195064] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:18:16.281 1 00:18:16.281 11:43:15 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:18:16.281 11:43:15 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:18:17.243 11:43:16 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=77412 00:18:17.244 11:43:16 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:18:17.244 11:43:16 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:18:17.501 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:17.501 fio-3.35 00:18:17.501 Starting 1 process 00:18:22.791 11:43:21 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 77376 00:18:22.791 11:43:21 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:18:28.051 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 77376 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:18:28.051 11:43:26 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=77522 00:18:28.051 11:43:26 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:28.051 11:43:26 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:18:28.051 11:43:26 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 77522 00:18:28.051 11:43:26 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 77522 ']' 00:18:28.051 11:43:26 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:28.051 11:43:26 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:28.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:28.051 11:43:26 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:28.051 11:43:26 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:28.051 11:43:26 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:28.051 [2024-07-25 11:43:26.349557] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
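When the replacement target (pid 77522) comes up below, the proof that recovery worked is a three-step control-plane handshake in the driver log: UBLK_CMD_GET_DEV_INFO re-reads the surviving kernel device, then UBLK_CMD_START_USER_RECOVERY and UBLK_CMD_END_USER_RECOVERY bracket the queue takeover. A quick offline check of any such run is to grep the saved console output for that ordered triple (a sketch; console.log is a placeholder for wherever this output was captured):

    # all three must appear, in this order, for ublk 1
    grep -E 'ublk1: ctrl cmd UBLK_CMD_(GET_DEV_INFO|START_USER_RECOVERY|END_USER_RECOVERY) completed' console.log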
00:18:28.051 [2024-07-25 11:43:26.349758] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77522 ] 00:18:28.051 [2024-07-25 11:43:26.528200] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:28.051 [2024-07-25 11:43:26.779821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:28.052 [2024-07-25 11:43:26.779831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:28.618 11:43:27 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:28.618 11:43:27 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:18:28.618 11:43:27 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:18:28.618 11:43:27 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.618 11:43:27 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:28.618 [2024-07-25 11:43:27.616950] ublk.c: 538:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:28.618 [2024-07-25 11:43:27.620287] ublk.c: 724:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:28.618 11:43:27 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.618 11:43:27 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:18:28.618 11:43:27 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.618 11:43:27 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:28.876 malloc0 00:18:28.876 11:43:27 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.876 11:43:27 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:18:28.876 11:43:27 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.876 11:43:27 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:28.876 [2024-07-25 11:43:27.793173] ublk.c:2077:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:18:28.876 [2024-07-25 11:43:27.793242] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:18:28.876 [2024-07-25 11:43:27.793257] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:18:28.876 [2024-07-25 11:43:27.801046] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:18:28.876 [2024-07-25 11:43:27.801097] ublk.c:2006:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:18:28.876 [2024-07-25 11:43:27.801239] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:18:28.876 1 00:18:28.876 11:43:27 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.876 11:43:27 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 77412 00:18:28.876 [2024-07-25 11:43:27.808974] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:18:28.876 [2024-07-25 11:43:27.816632] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:18:28.876 [2024-07-25 11:43:27.824424] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:18:28.876 [2024-07-25 11:43:27.824473] ublk.c: 379:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:19:25.089 00:19:25.089 
fio_test: (groupid=0, jobs=1): err= 0: pid=77415: Thu Jul 25 11:44:16 2024
00:19:25.089 read: IOPS=17.7k, BW=69.0MiB/s (72.3MB/s)(4138MiB/60003msec)
00:19:25.089 slat (nsec): min=1973, max=1080.2k, avg=6628.48, stdev=2960.17
00:19:25.089 clat (usec): min=1428, max=6629.1k, avg=3526.13, stdev=49016.30
00:19:25.089 lat (usec): min=1439, max=6629.1k, avg=3532.76, stdev=49016.30
00:19:25.089 clat percentiles (usec):
00:19:25.089 | 1.00th=[ 2671], 5.00th=[ 2868], 10.00th=[ 2933], 20.00th=[ 2966],
00:19:25.089 | 30.00th=[ 2999], 40.00th=[ 3032], 50.00th=[ 3064], 60.00th=[ 3097],
00:19:25.089 | 70.00th=[ 3130], 80.00th=[ 3163], 90.00th=[ 3261], 95.00th=[ 3884],
00:19:25.089 | 99.00th=[ 5735], 99.50th=[ 6849], 99.90th=[ 7767], 99.95th=[ 8225],
00:19:25.089 | 99.99th=[13435]
00:19:25.089 bw ( KiB/s): min= 8000, max=82256, per=100.00%, avg=78573.21, stdev=9239.50, samples=107
00:19:25.089 iops : min= 2000, max=20564, avg=19643.30, stdev=2309.87, samples=107
00:19:25.089 write: IOPS=17.6k, BW=68.9MiB/s (72.2MB/s)(4134MiB/60003msec); 0 zone resets
00:19:25.089 slat (nsec): min=1947, max=221218, avg=6702.54, stdev=2796.70
00:19:25.089 clat (usec): min=1454, max=6629.6k, avg=3712.24, stdev=53874.28
00:19:25.089 lat (usec): min=1473, max=6629.6k, avg=3718.95, stdev=53874.27
00:19:25.089 clat percentiles (usec):
00:19:25.089 | 1.00th=[ 2704], 5.00th=[ 2999], 10.00th=[ 3032], 20.00th=[ 3097],
00:19:25.089 | 30.00th=[ 3130], 40.00th=[ 3163], 50.00th=[ 3195], 60.00th=[ 3228],
00:19:25.089 | 70.00th=[ 3261], 80.00th=[ 3294], 90.00th=[ 3392], 95.00th=[ 3785],
00:19:25.089 | 99.00th=[ 5735], 99.50th=[ 6915], 99.90th=[ 7832], 99.95th=[ 8225],
00:19:25.089 | 99.99th=[13566]
00:19:25.089 bw ( KiB/s): min= 8464, max=82504, per=100.00%, avg=78486.87, stdev=9192.40, samples=107
00:19:25.089 iops : min= 2116, max=20626, avg=19621.71, stdev=2298.10, samples=107
00:19:25.089 lat (msec) : 2=0.04%, 4=95.52%, 10=4.41%, 20=0.02%, >=2000=0.01%
00:19:25.089 cpu : usr=9.71%, sys=22.28%, ctx=66008, majf=0, minf=13
00:19:25.089 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
00:19:25.089 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:25.089 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:19:25.089 issued rwts: total=1059386,1058326,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:25.089 latency : target=0, window=0, percentile=100.00%, depth=128
00:19:25.089
00:19:25.089 Run status group 0 (all jobs):
00:19:25.089 READ: bw=69.0MiB/s (72.3MB/s), 69.0MiB/s-69.0MiB/s (72.3MB/s-72.3MB/s), io=4138MiB (4339MB), run=60003-60003msec
00:19:25.089 WRITE: bw=68.9MiB/s (72.2MB/s), 68.9MiB/s-68.9MiB/s (72.2MB/s-72.2MB/s), io=4134MiB (4335MB), run=60003-60003msec
00:19:25.089
00:19:25.089 Disk stats (read/write):
00:19:25.089 ublkb1: ios=1057080/1056043, merge=0/0, ticks=3628953/3698116, in_queue=7327070, util=99.95%
00:19:25.089 11:44:16 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1
00:19:25.089 11:44:16 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:25.089 11:44:16 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:19:25.089 [2024-07-25 11:44:16.464977] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV
00:19:25.089 [2024-07-25 11:44:16.504118] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed
00:19:25.089 [2024-07-25 11:44:16.504445] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV
00:19:25.089 [2024-07-25
11:44:16.511948] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:25.089 [2024-07-25 11:44:16.512121] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:19:25.089 [2024-07-25 11:44:16.512142] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:19:25.089 11:44:16 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.089 11:44:16 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:19:25.089 11:44:16 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.089 11:44:16 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:25.089 [2024-07-25 11:44:16.517108] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:19:25.089 [2024-07-25 11:44:16.524069] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:19:25.089 [2024-07-25 11:44:16.524116] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:19:25.089 11:44:16 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.089 11:44:16 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:19:25.089 11:44:16 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:19:25.089 11:44:16 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 77522 00:19:25.089 11:44:16 ublk_recovery -- common/autotest_common.sh@950 -- # '[' -z 77522 ']' 00:19:25.089 11:44:16 ublk_recovery -- common/autotest_common.sh@954 -- # kill -0 77522 00:19:25.089 11:44:16 ublk_recovery -- common/autotest_common.sh@955 -- # uname 00:19:25.089 11:44:16 ublk_recovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:25.089 11:44:16 ublk_recovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77522 00:19:25.089 11:44:16 ublk_recovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:25.089 11:44:16 ublk_recovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:25.089 killing process with pid 77522 00:19:25.089 11:44:16 ublk_recovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77522' 00:19:25.089 11:44:16 ublk_recovery -- common/autotest_common.sh@969 -- # kill 77522 00:19:25.089 11:44:16 ublk_recovery -- common/autotest_common.sh@974 -- # wait 77522 00:19:25.089 [2024-07-25 11:44:17.625307] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:19:25.089 [2024-07-25 11:44:17.625397] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:19:25.089 00:19:25.089 real 1m5.527s 00:19:25.089 user 1m48.704s 00:19:25.089 sys 0m30.402s 00:19:25.089 11:44:19 ublk_recovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:25.089 11:44:19 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:25.089 ************************************ 00:19:25.089 END TEST ublk_recovery 00:19:25.089 ************************************ 00:19:25.089 11:44:19 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:19:25.089 11:44:19 -- spdk/autotest.sh@264 -- # timing_exit lib 00:19:25.089 11:44:19 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:25.089 11:44:19 -- common/autotest_common.sh@10 -- # set +x 00:19:25.089 11:44:19 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:19:25.089 11:44:19 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:19:25.089 11:44:19 -- spdk/autotest.sh@283 -- # '[' 0 -eq 1 ']' 00:19:25.089 11:44:19 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:19:25.089 11:44:19 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:19:25.089 11:44:19 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:19:25.089 
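The run of '[' 0 -eq 1 ']' checks that follows is autotest.sh walking its suite gates: each optional suite is guarded by a flag and dispatched through run_test only when the flag is 1, which is why every gate below evaluates to 0 until the ftl gate hits '[' 1 -eq 1 ']'. A minimal sketch of the gating pattern (the variable name SPDK_TEST_FTL follows SPDK's SPDK_TEST_* naming convention and is an assumption here, since the flag name itself is not printed in the trace):

    # in autotest.sh, per suite:
    if [ "${SPDK_TEST_FTL:-0}" -eq 1 ]; then
        run_test ftl "$rootdir/test/ftl/ftl.sh"
    fi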
11:44:19 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:19:25.089 11:44:19 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:19:25.089 11:44:19 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:19:25.089 11:44:19 -- spdk/autotest.sh@343 -- # '[' 1 -eq 1 ']' 00:19:25.089 11:44:19 -- spdk/autotest.sh@344 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:19:25.089 11:44:19 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:25.089 11:44:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:25.089 11:44:19 -- common/autotest_common.sh@10 -- # set +x 00:19:25.089 ************************************ 00:19:25.089 START TEST ftl 00:19:25.089 ************************************ 00:19:25.089 11:44:19 ftl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:19:25.089 * Looking for test storage... 00:19:25.089 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:25.089 11:44:19 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:25.089 11:44:19 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:19:25.089 11:44:19 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:25.089 11:44:19 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:25.089 11:44:19 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:25.089 11:44:19 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:25.089 11:44:19 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:25.089 11:44:19 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:25.089 11:44:19 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:25.089 11:44:19 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:25.089 11:44:19 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:25.089 11:44:19 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:25.089 11:44:19 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:25.089 11:44:19 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:25.089 11:44:19 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:25.089 11:44:19 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:25.089 11:44:19 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:25.089 11:44:19 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:25.089 11:44:19 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:25.089 11:44:19 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:25.089 11:44:19 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:25.089 11:44:19 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:25.089 11:44:19 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:25.089 11:44:19 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:25.089 11:44:19 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:25.089 11:44:19 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:25.089 11:44:19 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:25.089 11:44:19 ftl -- ftl/common.sh@25 
-- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:25.089 11:44:19 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:25.089 11:44:19 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:25.089 11:44:19 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:19:25.090 11:44:19 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:19:25.090 11:44:19 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:19:25.090 11:44:19 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:19:25.090 11:44:19 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:25.090 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:25.090 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:25.090 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:25.090 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:25.090 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:25.090 11:44:19 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=78300 00:19:25.090 11:44:19 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:19:25.090 11:44:19 ftl -- ftl/ftl.sh@38 -- # waitforlisten 78300 00:19:25.090 11:44:19 ftl -- common/autotest_common.sh@831 -- # '[' -z 78300 ']' 00:19:25.090 11:44:19 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:25.090 11:44:19 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:25.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:25.090 11:44:19 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:25.090 11:44:19 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:25.090 11:44:19 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:25.090 [2024-07-25 11:44:19.876004] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
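The ftl.sh device-selection step traced just below filters rpc.py bdev_get_bdevs output with jq: the NV-cache candidate must report 64-byte metadata (md_size==64), and both cache and base disks must be non-zoned with at least 1310720 blocks. Pulled out of the trace and joined into one standalone pipeline, the cache-disk query is (a sketch; the pipe between the two traced commands is implied rather than shown in the log):

    scripts/rpc.py bdev_get_bdevs \
        | jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address'

On this VM it resolves to 0000:00:10.0 for the cache; the complementary base-disk filter, which excludes that PCI address, resolves to 0000:00:11.0.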
00:19:25.090 [2024-07-25 11:44:19.876216] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78300 ] 00:19:25.090 [2024-07-25 11:44:20.057301] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.090 [2024-07-25 11:44:20.341046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:25.090 11:44:20 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:25.090 11:44:20 ftl -- common/autotest_common.sh@864 -- # return 0 00:19:25.090 11:44:20 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:19:25.090 11:44:21 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:19:25.090 11:44:22 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:19:25.090 11:44:22 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:25.090 11:44:22 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:19:25.090 11:44:22 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:19:25.090 11:44:22 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:19:25.090 11:44:22 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:19:25.090 11:44:22 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:19:25.090 11:44:22 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:19:25.090 11:44:22 ftl -- ftl/ftl.sh@50 -- # break 00:19:25.090 11:44:22 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:19:25.090 11:44:22 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:19:25.090 11:44:22 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:19:25.090 11:44:22 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:19:25.090 11:44:23 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:19:25.090 11:44:23 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:19:25.090 11:44:23 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:19:25.090 11:44:23 ftl -- ftl/ftl.sh@63 -- # break 00:19:25.090 11:44:23 ftl -- ftl/ftl.sh@66 -- # killprocess 78300 00:19:25.090 11:44:23 ftl -- common/autotest_common.sh@950 -- # '[' -z 78300 ']' 00:19:25.090 11:44:23 ftl -- common/autotest_common.sh@954 -- # kill -0 78300 00:19:25.090 11:44:23 ftl -- common/autotest_common.sh@955 -- # uname 00:19:25.090 11:44:23 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:25.090 11:44:23 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78300 00:19:25.090 11:44:23 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:25.090 11:44:23 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:25.090 killing process with pid 78300 00:19:25.090 11:44:23 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78300' 00:19:25.090 11:44:23 ftl -- common/autotest_common.sh@969 -- # kill 78300 00:19:25.090 11:44:23 ftl -- common/autotest_common.sh@974 -- # wait 78300 00:19:26.470 11:44:25 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:19:26.470 11:44:25 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic 
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:19:26.470 11:44:25 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:19:26.470 11:44:25 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:26.470 11:44:25 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:26.470 ************************************ 00:19:26.470 START TEST ftl_fio_basic 00:19:26.470 ************************************ 00:19:26.470 11:44:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:19:26.728 * Looking for test storage... 00:19:26.728 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:26.728 11:44:25 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:26.728 11:44:25 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:19:26.728 11:44:25 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:26.728 11:44:25 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:26.728 11:44:25 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:26.728 11:44:25 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:26.728 11:44:25 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:26.728 11:44:25 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:26.728 11:44:25 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:26.728 11:44:25 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:26.728 11:44:25 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:26.728 11:44:25 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:26.728 11:44:25 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:26.728 11:44:25 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:26.728 11:44:25 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:26.728 11:44:25 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:26.728 11:44:25 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:26.728 11:44:25 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:26.728 11:44:25 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:26.728 11:44:25 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:26.728 11:44:25 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:26.728 11:44:25 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:26.728 11:44:25 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:26.728 11:44:25 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:26.728 11:44:25 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:26.728 11:44:25 ftl.ftl_fio_basic -- 
ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:26.728 11:44:25 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:26.728 11:44:25 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:26.728 11:44:25 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:26.728 11:44:25 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:19:26.728 11:44:25 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:19:26.728 11:44:25 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:19:26.729 11:44:25 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:19:26.729 11:44:25 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:26.729 11:44:25 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:19:26.729 11:44:25 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:19:26.729 11:44:25 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:19:26.729 11:44:25 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:19:26.729 11:44:25 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:19:26.729 11:44:25 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:19:26.729 11:44:25 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:19:26.729 11:44:25 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:19:26.729 11:44:25 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:19:26.729 11:44:25 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:26.729 11:44:25 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:26.729 11:44:25 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:19:26.729 11:44:25 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=78442 00:19:26.729 11:44:25 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 78442 00:19:26.729 11:44:25 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:19:26.729 11:44:25 ftl.ftl_fio_basic -- common/autotest_common.sh@831 -- # '[' -z 78442 ']' 00:19:26.729 11:44:25 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:26.729 11:44:25 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:26.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:26.729 11:44:25 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:26.729 11:44:25 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:26.729 11:44:25 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:26.729 [2024-07-25 11:44:25.715802] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
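The fio.sh prologue above declares the test suites in a bash associative array and selects one by positional argument ('basic' in this run, yielding the three randw-verify jobs). A sketch of that lookup, with the suite values copied verbatim from the trace (the exact argument handling inside fio.sh is assumed from the `fio.sh 0000:00:11.0 0000:00:10.0 basic` invocation):

    declare -A suite
    suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128'
    suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap'
    suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght'

    device=$1 cache_device=$2      # 0000:00:11.0 and 0000:00:10.0 in this run
    tests=${suite[$3]}             # $3 = 'basic' picks the three jobs above
    if [ -z "$tests" ]; then
        echo "invalid test suite '$3'" >&2
        exit 1
    fi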
00:19:26.729 [2024-07-25 11:44:25.716010] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78442 ] 00:19:26.987 [2024-07-25 11:44:25.892912] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:27.247 [2024-07-25 11:44:26.141028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:27.247 [2024-07-25 11:44:26.141144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:27.247 [2024-07-25 11:44:26.141146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:28.180 11:44:26 ftl.ftl_fio_basic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:28.180 11:44:26 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # return 0 00:19:28.180 11:44:26 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:28.180 11:44:26 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:19:28.180 11:44:26 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:28.180 11:44:26 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:19:28.180 11:44:26 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:19:28.180 11:44:26 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:28.438 11:44:27 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:28.438 11:44:27 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:19:28.438 11:44:27 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:28.438 11:44:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:19:28.438 11:44:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:28.438 11:44:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:19:28.438 11:44:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:19:28.438 11:44:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:28.695 11:44:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:28.695 { 00:19:28.695 "name": "nvme0n1", 00:19:28.695 "aliases": [ 00:19:28.695 "a6c1820b-a924-4457-8467-45bebc171e25" 00:19:28.695 ], 00:19:28.695 "product_name": "NVMe disk", 00:19:28.695 "block_size": 4096, 00:19:28.695 "num_blocks": 1310720, 00:19:28.695 "uuid": "a6c1820b-a924-4457-8467-45bebc171e25", 00:19:28.695 "assigned_rate_limits": { 00:19:28.695 "rw_ios_per_sec": 0, 00:19:28.695 "rw_mbytes_per_sec": 0, 00:19:28.695 "r_mbytes_per_sec": 0, 00:19:28.695 "w_mbytes_per_sec": 0 00:19:28.695 }, 00:19:28.695 "claimed": false, 00:19:28.695 "zoned": false, 00:19:28.695 "supported_io_types": { 00:19:28.695 "read": true, 00:19:28.695 "write": true, 00:19:28.695 "unmap": true, 00:19:28.695 "flush": true, 00:19:28.695 "reset": true, 00:19:28.695 "nvme_admin": true, 00:19:28.695 "nvme_io": true, 00:19:28.695 "nvme_io_md": false, 00:19:28.695 "write_zeroes": true, 00:19:28.695 "zcopy": false, 00:19:28.695 "get_zone_info": false, 00:19:28.695 "zone_management": false, 00:19:28.695 "zone_append": false, 00:19:28.695 "compare": true, 00:19:28.695 "compare_and_write": false, 00:19:28.695 "abort": true, 00:19:28.695 "seek_hole": false, 00:19:28.695 
"seek_data": false, 00:19:28.695 "copy": true, 00:19:28.695 "nvme_iov_md": false 00:19:28.695 }, 00:19:28.695 "driver_specific": { 00:19:28.695 "nvme": [ 00:19:28.695 { 00:19:28.695 "pci_address": "0000:00:11.0", 00:19:28.695 "trid": { 00:19:28.695 "trtype": "PCIe", 00:19:28.695 "traddr": "0000:00:11.0" 00:19:28.695 }, 00:19:28.695 "ctrlr_data": { 00:19:28.695 "cntlid": 0, 00:19:28.695 "vendor_id": "0x1b36", 00:19:28.695 "model_number": "QEMU NVMe Ctrl", 00:19:28.695 "serial_number": "12341", 00:19:28.695 "firmware_revision": "8.0.0", 00:19:28.695 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:28.695 "oacs": { 00:19:28.695 "security": 0, 00:19:28.695 "format": 1, 00:19:28.695 "firmware": 0, 00:19:28.695 "ns_manage": 1 00:19:28.695 }, 00:19:28.695 "multi_ctrlr": false, 00:19:28.695 "ana_reporting": false 00:19:28.696 }, 00:19:28.696 "vs": { 00:19:28.696 "nvme_version": "1.4" 00:19:28.696 }, 00:19:28.696 "ns_data": { 00:19:28.696 "id": 1, 00:19:28.696 "can_share": false 00:19:28.696 } 00:19:28.696 } 00:19:28.696 ], 00:19:28.696 "mp_policy": "active_passive" 00:19:28.696 } 00:19:28.696 } 00:19:28.696 ]' 00:19:28.696 11:44:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:28.696 11:44:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:19:28.696 11:44:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:28.696 11:44:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=1310720 00:19:28.696 11:44:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:19:28.696 11:44:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 5120 00:19:28.696 11:44:27 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:19:28.696 11:44:27 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:28.696 11:44:27 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:19:28.696 11:44:27 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:28.696 11:44:27 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:28.953 11:44:27 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:19:28.953 11:44:27 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:29.211 11:44:28 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=3316d9d8-f4c5-4678-926a-0d160f3d2b3a 00:19:29.211 11:44:28 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 3316d9d8-f4c5-4678-926a-0d160f3d2b3a 00:19:29.469 11:44:28 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=a31f1de0-ab07-4b25-8c3e-b36adaaffcd3 00:19:29.469 11:44:28 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 a31f1de0-ab07-4b25-8c3e-b36adaaffcd3 00:19:29.469 11:44:28 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:19:29.469 11:44:28 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:29.469 11:44:28 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=a31f1de0-ab07-4b25-8c3e-b36adaaffcd3 00:19:29.469 11:44:28 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:19:29.469 11:44:28 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size a31f1de0-ab07-4b25-8c3e-b36adaaffcd3 00:19:29.469 11:44:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=a31f1de0-ab07-4b25-8c3e-b36adaaffcd3 00:19:29.469 11:44:28 
ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:29.469 11:44:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:19:29.469 11:44:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:19:29.469 11:44:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a31f1de0-ab07-4b25-8c3e-b36adaaffcd3 00:19:29.728 11:44:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:29.728 { 00:19:29.728 "name": "a31f1de0-ab07-4b25-8c3e-b36adaaffcd3", 00:19:29.728 "aliases": [ 00:19:29.728 "lvs/nvme0n1p0" 00:19:29.728 ], 00:19:29.728 "product_name": "Logical Volume", 00:19:29.728 "block_size": 4096, 00:19:29.728 "num_blocks": 26476544, 00:19:29.728 "uuid": "a31f1de0-ab07-4b25-8c3e-b36adaaffcd3", 00:19:29.728 "assigned_rate_limits": { 00:19:29.728 "rw_ios_per_sec": 0, 00:19:29.728 "rw_mbytes_per_sec": 0, 00:19:29.728 "r_mbytes_per_sec": 0, 00:19:29.728 "w_mbytes_per_sec": 0 00:19:29.728 }, 00:19:29.728 "claimed": false, 00:19:29.728 "zoned": false, 00:19:29.728 "supported_io_types": { 00:19:29.728 "read": true, 00:19:29.728 "write": true, 00:19:29.728 "unmap": true, 00:19:29.728 "flush": false, 00:19:29.728 "reset": true, 00:19:29.728 "nvme_admin": false, 00:19:29.728 "nvme_io": false, 00:19:29.728 "nvme_io_md": false, 00:19:29.728 "write_zeroes": true, 00:19:29.728 "zcopy": false, 00:19:29.728 "get_zone_info": false, 00:19:29.728 "zone_management": false, 00:19:29.728 "zone_append": false, 00:19:29.728 "compare": false, 00:19:29.728 "compare_and_write": false, 00:19:29.728 "abort": false, 00:19:29.728 "seek_hole": true, 00:19:29.728 "seek_data": true, 00:19:29.728 "copy": false, 00:19:29.728 "nvme_iov_md": false 00:19:29.728 }, 00:19:29.728 "driver_specific": { 00:19:29.728 "lvol": { 00:19:29.728 "lvol_store_uuid": "3316d9d8-f4c5-4678-926a-0d160f3d2b3a", 00:19:29.728 "base_bdev": "nvme0n1", 00:19:29.728 "thin_provision": true, 00:19:29.728 "num_allocated_clusters": 0, 00:19:29.728 "snapshot": false, 00:19:29.728 "clone": false, 00:19:29.728 "esnap_clone": false 00:19:29.728 } 00:19:29.728 } 00:19:29.728 } 00:19:29.728 ]' 00:19:29.728 11:44:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:29.728 11:44:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:19:29.728 11:44:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:29.987 11:44:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:19:29.987 11:44:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:19:29.987 11:44:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:19:29.987 11:44:28 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:19:29.987 11:44:28 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:19:29.987 11:44:28 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:30.245 11:44:29 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:30.245 11:44:29 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:30.245 11:44:29 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size a31f1de0-ab07-4b25-8c3e-b36adaaffcd3 00:19:30.245 11:44:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=a31f1de0-ab07-4b25-8c3e-b36adaaffcd3 00:19:30.245 11:44:29 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1379 -- # local bdev_info 00:19:30.245 11:44:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:19:30.245 11:44:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:19:30.245 11:44:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a31f1de0-ab07-4b25-8c3e-b36adaaffcd3 00:19:30.504 11:44:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:30.504 { 00:19:30.504 "name": "a31f1de0-ab07-4b25-8c3e-b36adaaffcd3", 00:19:30.504 "aliases": [ 00:19:30.504 "lvs/nvme0n1p0" 00:19:30.504 ], 00:19:30.504 "product_name": "Logical Volume", 00:19:30.504 "block_size": 4096, 00:19:30.504 "num_blocks": 26476544, 00:19:30.504 "uuid": "a31f1de0-ab07-4b25-8c3e-b36adaaffcd3", 00:19:30.504 "assigned_rate_limits": { 00:19:30.504 "rw_ios_per_sec": 0, 00:19:30.504 "rw_mbytes_per_sec": 0, 00:19:30.504 "r_mbytes_per_sec": 0, 00:19:30.504 "w_mbytes_per_sec": 0 00:19:30.504 }, 00:19:30.504 "claimed": false, 00:19:30.504 "zoned": false, 00:19:30.504 "supported_io_types": { 00:19:30.504 "read": true, 00:19:30.504 "write": true, 00:19:30.504 "unmap": true, 00:19:30.504 "flush": false, 00:19:30.504 "reset": true, 00:19:30.504 "nvme_admin": false, 00:19:30.504 "nvme_io": false, 00:19:30.504 "nvme_io_md": false, 00:19:30.504 "write_zeroes": true, 00:19:30.504 "zcopy": false, 00:19:30.504 "get_zone_info": false, 00:19:30.504 "zone_management": false, 00:19:30.504 "zone_append": false, 00:19:30.504 "compare": false, 00:19:30.504 "compare_and_write": false, 00:19:30.504 "abort": false, 00:19:30.504 "seek_hole": true, 00:19:30.504 "seek_data": true, 00:19:30.504 "copy": false, 00:19:30.504 "nvme_iov_md": false 00:19:30.504 }, 00:19:30.504 "driver_specific": { 00:19:30.504 "lvol": { 00:19:30.504 "lvol_store_uuid": "3316d9d8-f4c5-4678-926a-0d160f3d2b3a", 00:19:30.504 "base_bdev": "nvme0n1", 00:19:30.504 "thin_provision": true, 00:19:30.504 "num_allocated_clusters": 0, 00:19:30.504 "snapshot": false, 00:19:30.504 "clone": false, 00:19:30.504 "esnap_clone": false 00:19:30.504 } 00:19:30.504 } 00:19:30.504 } 00:19:30.504 ]' 00:19:30.504 11:44:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:30.504 11:44:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:19:30.504 11:44:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:30.504 11:44:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:19:30.504 11:44:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:19:30.504 11:44:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:19:30.504 11:44:29 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:19:30.504 11:44:29 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:30.763 11:44:29 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:19:30.763 11:44:29 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:19:30.763 11:44:29 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:19:30.763 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:19:30.763 11:44:29 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size a31f1de0-ab07-4b25-8c3e-b36adaaffcd3 00:19:30.763 11:44:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=a31f1de0-ab07-4b25-8c3e-b36adaaffcd3 
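The "[: -eq: unary operator expected" message recorded above is a test-script bug, not a device failure: fio.sh line 52 executed `'[' -eq 1 ']'`, meaning the left-hand variable expanded to nothing, so `[` saw a binary operator with no left operand. The run continues because the failed test merely makes the `if` condition false. A hedged repro and fix (the actual variable name at line 52 is not visible in this trace, so `flag` below is a placeholder):

    unset flag
    # [ $flag -eq 1 ]                 # expands to '[ -eq 1 ]' -> the error above

    if [ "${flag:-0}" -eq 1 ]; then   # default the expansion to a number
        echo "flag set"
    fi
    if (( ${flag:-0} == 1 )); then    # or use arithmetic evaluation
        echo "flag set"
    fi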
00:19:30.763 11:44:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:30.763 11:44:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:19:30.763 11:44:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:19:30.763 11:44:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a31f1de0-ab07-4b25-8c3e-b36adaaffcd3 00:19:31.021 11:44:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:31.021 { 00:19:31.021 "name": "a31f1de0-ab07-4b25-8c3e-b36adaaffcd3", 00:19:31.021 "aliases": [ 00:19:31.021 "lvs/nvme0n1p0" 00:19:31.021 ], 00:19:31.021 "product_name": "Logical Volume", 00:19:31.021 "block_size": 4096, 00:19:31.021 "num_blocks": 26476544, 00:19:31.021 "uuid": "a31f1de0-ab07-4b25-8c3e-b36adaaffcd3", 00:19:31.021 "assigned_rate_limits": { 00:19:31.021 "rw_ios_per_sec": 0, 00:19:31.021 "rw_mbytes_per_sec": 0, 00:19:31.021 "r_mbytes_per_sec": 0, 00:19:31.021 "w_mbytes_per_sec": 0 00:19:31.021 }, 00:19:31.021 "claimed": false, 00:19:31.021 "zoned": false, 00:19:31.021 "supported_io_types": { 00:19:31.021 "read": true, 00:19:31.021 "write": true, 00:19:31.021 "unmap": true, 00:19:31.021 "flush": false, 00:19:31.021 "reset": true, 00:19:31.021 "nvme_admin": false, 00:19:31.021 "nvme_io": false, 00:19:31.021 "nvme_io_md": false, 00:19:31.021 "write_zeroes": true, 00:19:31.021 "zcopy": false, 00:19:31.021 "get_zone_info": false, 00:19:31.021 "zone_management": false, 00:19:31.021 "zone_append": false, 00:19:31.021 "compare": false, 00:19:31.021 "compare_and_write": false, 00:19:31.021 "abort": false, 00:19:31.021 "seek_hole": true, 00:19:31.021 "seek_data": true, 00:19:31.021 "copy": false, 00:19:31.021 "nvme_iov_md": false 00:19:31.021 }, 00:19:31.021 "driver_specific": { 00:19:31.021 "lvol": { 00:19:31.021 "lvol_store_uuid": "3316d9d8-f4c5-4678-926a-0d160f3d2b3a", 00:19:31.021 "base_bdev": "nvme0n1", 00:19:31.021 "thin_provision": true, 00:19:31.021 "num_allocated_clusters": 0, 00:19:31.021 "snapshot": false, 00:19:31.021 "clone": false, 00:19:31.021 "esnap_clone": false 00:19:31.021 } 00:19:31.021 } 00:19:31.021 } 00:19:31.021 ]' 00:19:31.021 11:44:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:31.021 11:44:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:19:31.021 11:44:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:31.021 11:44:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:19:31.021 11:44:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:19:31.021 11:44:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:19:31.021 11:44:30 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:19:31.021 11:44:30 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:19:31.021 11:44:30 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d a31f1de0-ab07-4b25-8c3e-b36adaaffcd3 -c nvc0n1p0 --l2p_dram_limit 60 00:19:31.280 [2024-07-25 11:44:30.295079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.280 [2024-07-25 11:44:30.295200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:31.280 [2024-07-25 11:44:30.295225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:31.280 [2024-07-25 11:44:30.295243] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.280 [2024-07-25 11:44:30.295350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.280 [2024-07-25 11:44:30.295374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:31.280 [2024-07-25 11:44:30.295388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:19:31.280 [2024-07-25 11:44:30.295404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.280 [2024-07-25 11:44:30.295454] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:31.280 [2024-07-25 11:44:30.296633] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:31.280 [2024-07-25 11:44:30.296677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.280 [2024-07-25 11:44:30.296701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:31.280 [2024-07-25 11:44:30.296716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.235 ms 00:19:31.280 [2024-07-25 11:44:30.296730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.280 [2024-07-25 11:44:30.297001] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 0d61488d-6e24-4888-9a88-40bdfc647701 00:19:31.280 [2024-07-25 11:44:30.298947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.280 [2024-07-25 11:44:30.298984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:31.280 [2024-07-25 11:44:30.299005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:19:31.280 [2024-07-25 11:44:30.299017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.280 [2024-07-25 11:44:30.309152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.280 [2024-07-25 11:44:30.309236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:31.280 [2024-07-25 11:44:30.309267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.966 ms 00:19:31.280 [2024-07-25 11:44:30.309280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.280 [2024-07-25 11:44:30.309481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.280 [2024-07-25 11:44:30.309508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:31.280 [2024-07-25 11:44:30.309526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:19:31.280 [2024-07-25 11:44:30.309542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.280 [2024-07-25 11:44:30.309712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.280 [2024-07-25 11:44:30.309742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:31.280 [2024-07-25 11:44:30.309762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:19:31.280 [2024-07-25 11:44:30.309778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.280 [2024-07-25 11:44:30.309838] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:31.280 [2024-07-25 11:44:30.315209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.280 [2024-07-25 11:44:30.315271] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:31.280 [2024-07-25 11:44:30.315289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.394 ms 00:19:31.280 [2024-07-25 11:44:30.315304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.280 [2024-07-25 11:44:30.315376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.280 [2024-07-25 11:44:30.315397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:31.280 [2024-07-25 11:44:30.315410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:19:31.280 [2024-07-25 11:44:30.315424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.280 [2024-07-25 11:44:30.315519] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:31.280 [2024-07-25 11:44:30.315763] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:31.280 [2024-07-25 11:44:30.315804] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:31.281 [2024-07-25 11:44:30.315831] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:19:31.281 [2024-07-25 11:44:30.315848] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:31.281 [2024-07-25 11:44:30.315886] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:31.281 [2024-07-25 11:44:30.315900] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:31.281 [2024-07-25 11:44:30.315915] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:31.281 [2024-07-25 11:44:30.315948] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:31.281 [2024-07-25 11:44:30.315964] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:31.281 [2024-07-25 11:44:30.315977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.281 [2024-07-25 11:44:30.315992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:31.281 [2024-07-25 11:44:30.316006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.462 ms 00:19:31.281 [2024-07-25 11:44:30.316021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.281 [2024-07-25 11:44:30.316138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.281 [2024-07-25 11:44:30.316162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:31.281 [2024-07-25 11:44:30.316176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:19:31.281 [2024-07-25 11:44:30.316191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.281 [2024-07-25 11:44:30.316352] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:31.281 [2024-07-25 11:44:30.316394] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:31.281 [2024-07-25 11:44:30.316410] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:31.281 [2024-07-25 11:44:30.316428] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:31.281 [2024-07-25 11:44:30.316450] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:31.281 [2024-07-25 
11:44:30.316469] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:31.281 [2024-07-25 11:44:30.316481] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:31.281 [2024-07-25 11:44:30.316495] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:31.281 [2024-07-25 11:44:30.316505] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:31.281 [2024-07-25 11:44:30.316520] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:31.281 [2024-07-25 11:44:30.316536] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:31.281 [2024-07-25 11:44:30.316560] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:31.281 [2024-07-25 11:44:30.316572] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:31.281 [2024-07-25 11:44:30.316586] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:31.281 [2024-07-25 11:44:30.316597] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:31.281 [2024-07-25 11:44:30.316619] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:31.281 [2024-07-25 11:44:30.316630] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:31.281 [2024-07-25 11:44:30.316656] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:31.281 [2024-07-25 11:44:30.316677] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:31.281 [2024-07-25 11:44:30.316700] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:31.281 [2024-07-25 11:44:30.316712] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:31.281 [2024-07-25 11:44:30.316726] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:31.281 [2024-07-25 11:44:30.316737] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:31.281 [2024-07-25 11:44:30.316750] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:31.281 [2024-07-25 11:44:30.316761] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:31.281 [2024-07-25 11:44:30.316776] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:31.281 [2024-07-25 11:44:30.316787] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:31.281 [2024-07-25 11:44:30.316809] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:31.281 [2024-07-25 11:44:30.316829] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:31.281 [2024-07-25 11:44:30.316854] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:31.281 [2024-07-25 11:44:30.316867] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:31.281 [2024-07-25 11:44:30.316881] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:31.281 [2024-07-25 11:44:30.316892] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:31.281 [2024-07-25 11:44:30.316909] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:31.281 [2024-07-25 11:44:30.316935] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:31.281 [2024-07-25 11:44:30.316954] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:31.281 [2024-07-25 11:44:30.316966] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:19:31.281 [2024-07-25 11:44:30.316979] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:31.281 [2024-07-25 11:44:30.316990] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:31.281 [2024-07-25 11:44:30.317006] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:31.281 [2024-07-25 11:44:30.317019] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:31.281 [2024-07-25 11:44:30.317043] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:31.281 [2024-07-25 11:44:30.317064] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:31.281 [2024-07-25 11:44:30.317089] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:31.281 [2024-07-25 11:44:30.317106] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:31.281 [2024-07-25 11:44:30.317144] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:31.281 [2024-07-25 11:44:30.317156] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:31.281 [2024-07-25 11:44:30.317179] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:31.281 [2024-07-25 11:44:30.317192] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:31.281 [2024-07-25 11:44:30.317209] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:31.281 [2024-07-25 11:44:30.317225] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:31.281 [2024-07-25 11:44:30.317249] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:31.281 [2024-07-25 11:44:30.317271] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:31.281 [2024-07-25 11:44:30.317299] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:31.281 [2024-07-25 11:44:30.317316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:31.281 [2024-07-25 11:44:30.317337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:31.281 [2024-07-25 11:44:30.317350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:31.281 [2024-07-25 11:44:30.317368] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:31.281 [2024-07-25 11:44:30.317380] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:31.281 [2024-07-25 11:44:30.317394] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:31.281 [2024-07-25 11:44:30.317406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:31.281 [2024-07-25 11:44:30.317420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:31.281 [2024-07-25 11:44:30.317432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:31.281 [2024-07-25 
11:44:30.317453] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:31.281 [2024-07-25 11:44:30.317475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:31.281 [2024-07-25 11:44:30.317513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:31.281 [2024-07-25 11:44:30.317526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:31.281 [2024-07-25 11:44:30.317540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:31.281 [2024-07-25 11:44:30.317552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:31.282 [2024-07-25 11:44:30.317567] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:31.282 [2024-07-25 11:44:30.317581] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:31.282 [2024-07-25 11:44:30.317597] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:31.282 [2024-07-25 11:44:30.317613] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:31.282 [2024-07-25 11:44:30.317639] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:31.282 [2024-07-25 11:44:30.317662] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:31.282 [2024-07-25 11:44:30.317688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.282 [2024-07-25 11:44:30.317702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:31.282 [2024-07-25 11:44:30.317718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.419 ms 00:19:31.282 [2024-07-25 11:44:30.317730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.282 [2024-07-25 11:44:30.317855] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
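The entire FTL bring-up trace above (superblock init, band/L2P/NV-cache setup, layout dump) is driven by the single RPC issued at fio.sh@60. A sketch of that call in isolation, with the bdev names, UUID, and flags copied verbatim from the trace and the 240 s timeout matching the `timeout=240` set earlier:

    RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # -d names the base bdev (the thin lvol), -c the NV cache split, and
    # --l2p_dram_limit caps the resident L2P table; the trace later reports
    # "l2p maximum resident size is: 59 (of 60) MiB" against this limit.
    "$RPC_PY" -t 240 bdev_ftl_create -b ftl0 \
        -d a31f1de0-ab07-4b25-8c3e-b36adaaffcd3 -c nvc0n1p0 --l2p_dram_limit 60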
00:19:31.282 [2024-07-25 11:44:30.317877] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:34.562 [2024-07-25 11:44:33.536353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.562 [2024-07-25 11:44:33.536436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:34.562 [2024-07-25 11:44:33.536463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3218.501 ms 00:19:34.562 [2024-07-25 11:44:33.536477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.562 [2024-07-25 11:44:33.575971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.562 [2024-07-25 11:44:33.576040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:34.562 [2024-07-25 11:44:33.576075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.145 ms 00:19:34.562 [2024-07-25 11:44:33.576089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.562 [2024-07-25 11:44:33.576365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.562 [2024-07-25 11:44:33.576408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:34.562 [2024-07-25 11:44:33.576430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:19:34.562 [2024-07-25 11:44:33.576448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.821 [2024-07-25 11:44:33.635021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.821 [2024-07-25 11:44:33.635097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:34.821 [2024-07-25 11:44:33.635130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.432 ms 00:19:34.821 [2024-07-25 11:44:33.635149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.821 [2024-07-25 11:44:33.635259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.821 [2024-07-25 11:44:33.635304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:34.821 [2024-07-25 11:44:33.635351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:34.821 [2024-07-25 11:44:33.635371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.821 [2024-07-25 11:44:33.636156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.821 [2024-07-25 11:44:33.636202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:34.821 [2024-07-25 11:44:33.636227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.628 ms 00:19:34.821 [2024-07-25 11:44:33.636243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.821 [2024-07-25 11:44:33.636524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.821 [2024-07-25 11:44:33.636578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:34.821 [2024-07-25 11:44:33.636605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.213 ms 00:19:34.821 [2024-07-25 11:44:33.636622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.821 [2024-07-25 11:44:33.659345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.821 [2024-07-25 11:44:33.659397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:34.821 [2024-07-25 
11:44:33.659420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.662 ms 00:19:34.821 [2024-07-25 11:44:33.659434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.821 [2024-07-25 11:44:33.673862] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:34.821 [2024-07-25 11:44:33.695678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.821 [2024-07-25 11:44:33.695798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:34.821 [2024-07-25 11:44:33.695823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.062 ms 00:19:34.821 [2024-07-25 11:44:33.695839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.821 [2024-07-25 11:44:33.763677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.821 [2024-07-25 11:44:33.763787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:34.821 [2024-07-25 11:44:33.763811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.741 ms 00:19:34.821 [2024-07-25 11:44:33.763827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.821 [2024-07-25 11:44:33.764150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.821 [2024-07-25 11:44:33.764194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:34.821 [2024-07-25 11:44:33.764211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.232 ms 00:19:34.821 [2024-07-25 11:44:33.764230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.821 [2024-07-25 11:44:33.794908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.821 [2024-07-25 11:44:33.794969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:34.821 [2024-07-25 11:44:33.794988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.566 ms 00:19:34.821 [2024-07-25 11:44:33.795004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.821 [2024-07-25 11:44:33.824825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.821 [2024-07-25 11:44:33.824875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:34.821 [2024-07-25 11:44:33.824894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.759 ms 00:19:34.821 [2024-07-25 11:44:33.824909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.821 [2024-07-25 11:44:33.825864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.821 [2024-07-25 11:44:33.825907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:34.821 [2024-07-25 11:44:33.825937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.893 ms 00:19:34.821 [2024-07-25 11:44:33.825954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.079 [2024-07-25 11:44:33.917271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.079 [2024-07-25 11:44:33.917366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:35.079 [2024-07-25 11:44:33.917388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 91.225 ms 00:19:35.079 [2024-07-25 11:44:33.917409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.079 [2024-07-25 
11:44:33.949754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.079 [2024-07-25 11:44:33.949808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:35.079 [2024-07-25 11:44:33.949829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.282 ms 00:19:35.079 [2024-07-25 11:44:33.949845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.079 [2024-07-25 11:44:33.980212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.079 [2024-07-25 11:44:33.980263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:35.079 [2024-07-25 11:44:33.980288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.304 ms 00:19:35.079 [2024-07-25 11:44:33.980306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.079 [2024-07-25 11:44:34.011190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.079 [2024-07-25 11:44:34.011245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:35.079 [2024-07-25 11:44:34.011263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.828 ms 00:19:35.079 [2024-07-25 11:44:34.011279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.079 [2024-07-25 11:44:34.011349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.079 [2024-07-25 11:44:34.011371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:35.079 [2024-07-25 11:44:34.011385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:35.079 [2024-07-25 11:44:34.011403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.079 [2024-07-25 11:44:34.011564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.079 [2024-07-25 11:44:34.011592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:35.079 [2024-07-25 11:44:34.011607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:19:35.079 [2024-07-25 11:44:34.011631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.079 [2024-07-25 11:44:34.013158] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3717.470 ms, result 0 00:19:35.079 { 00:19:35.079 "name": "ftl0", 00:19:35.079 "uuid": "0d61488d-6e24-4888-9a88-40bdfc647701" 00:19:35.079 } 00:19:35.080 11:44:34 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:19:35.080 11:44:34 ftl.ftl_fio_basic -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:19:35.080 11:44:34 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:35.080 11:44:34 ftl.ftl_fio_basic -- common/autotest_common.sh@901 -- # local i 00:19:35.080 11:44:34 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:35.080 11:44:34 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:35.080 11:44:34 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:35.338 11:44:34 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:19:35.596 [ 00:19:35.596 { 00:19:35.596 "name": "ftl0", 00:19:35.596 "aliases": [ 00:19:35.596 "0d61488d-6e24-4888-9a88-40bdfc647701" 00:19:35.596 ], 00:19:35.596 "product_name": "FTL 
disk", 00:19:35.596 "block_size": 4096, 00:19:35.596 "num_blocks": 20971520, 00:19:35.596 "uuid": "0d61488d-6e24-4888-9a88-40bdfc647701", 00:19:35.596 "assigned_rate_limits": { 00:19:35.596 "rw_ios_per_sec": 0, 00:19:35.596 "rw_mbytes_per_sec": 0, 00:19:35.596 "r_mbytes_per_sec": 0, 00:19:35.596 "w_mbytes_per_sec": 0 00:19:35.596 }, 00:19:35.596 "claimed": false, 00:19:35.596 "zoned": false, 00:19:35.596 "supported_io_types": { 00:19:35.596 "read": true, 00:19:35.596 "write": true, 00:19:35.596 "unmap": true, 00:19:35.596 "flush": true, 00:19:35.596 "reset": false, 00:19:35.596 "nvme_admin": false, 00:19:35.596 "nvme_io": false, 00:19:35.596 "nvme_io_md": false, 00:19:35.596 "write_zeroes": true, 00:19:35.596 "zcopy": false, 00:19:35.596 "get_zone_info": false, 00:19:35.596 "zone_management": false, 00:19:35.596 "zone_append": false, 00:19:35.596 "compare": false, 00:19:35.596 "compare_and_write": false, 00:19:35.596 "abort": false, 00:19:35.596 "seek_hole": false, 00:19:35.596 "seek_data": false, 00:19:35.596 "copy": false, 00:19:35.596 "nvme_iov_md": false 00:19:35.596 }, 00:19:35.596 "driver_specific": { 00:19:35.596 "ftl": { 00:19:35.596 "base_bdev": "a31f1de0-ab07-4b25-8c3e-b36adaaffcd3", 00:19:35.596 "cache": "nvc0n1p0" 00:19:35.596 } 00:19:35.596 } 00:19:35.597 } 00:19:35.597 ] 00:19:35.597 11:44:34 ftl.ftl_fio_basic -- common/autotest_common.sh@907 -- # return 0 00:19:35.597 11:44:34 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:19:35.597 11:44:34 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:35.854 11:44:34 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:19:35.854 11:44:34 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:36.112 [2024-07-25 11:44:35.025815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.112 [2024-07-25 11:44:35.025894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:36.112 [2024-07-25 11:44:35.025961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:36.112 [2024-07-25 11:44:35.025976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.112 [2024-07-25 11:44:35.026028] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:36.112 [2024-07-25 11:44:35.029832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.112 [2024-07-25 11:44:35.029875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:36.112 [2024-07-25 11:44:35.029908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.778 ms 00:19:36.112 [2024-07-25 11:44:35.029923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.113 [2024-07-25 11:44:35.030443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.113 [2024-07-25 11:44:35.030494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:36.113 [2024-07-25 11:44:35.030515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.473 ms 00:19:36.113 [2024-07-25 11:44:35.030535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.113 [2024-07-25 11:44:35.033858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.113 [2024-07-25 11:44:35.033897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:36.113 
[2024-07-25 11:44:35.033913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.291 ms 00:19:36.113 [2024-07-25 11:44:35.033939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.113 [2024-07-25 11:44:35.040943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.113 [2024-07-25 11:44:35.041008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:36.113 [2024-07-25 11:44:35.041040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.966 ms 00:19:36.113 [2024-07-25 11:44:35.041061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.113 [2024-07-25 11:44:35.074458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.113 [2024-07-25 11:44:35.074533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:36.113 [2024-07-25 11:44:35.074554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.267 ms 00:19:36.113 [2024-07-25 11:44:35.074569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.113 [2024-07-25 11:44:35.094205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.113 [2024-07-25 11:44:35.094302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:36.113 [2024-07-25 11:44:35.094324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.570 ms 00:19:36.113 [2024-07-25 11:44:35.094339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.113 [2024-07-25 11:44:35.094769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.113 [2024-07-25 11:44:35.094806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:36.113 [2024-07-25 11:44:35.094831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.221 ms 00:19:36.113 [2024-07-25 11:44:35.094849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.113 [2024-07-25 11:44:35.126505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.113 [2024-07-25 11:44:35.126563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:36.113 [2024-07-25 11:44:35.126598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.622 ms 00:19:36.113 [2024-07-25 11:44:35.126613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.113 [2024-07-25 11:44:35.158173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.113 [2024-07-25 11:44:35.158251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:36.113 [2024-07-25 11:44:35.158271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.452 ms 00:19:36.113 [2024-07-25 11:44:35.158287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.372 [2024-07-25 11:44:35.189669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.372 [2024-07-25 11:44:35.189771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:36.372 [2024-07-25 11:44:35.189793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.321 ms 00:19:36.372 [2024-07-25 11:44:35.189808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.372 [2024-07-25 11:44:35.220492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.372 [2024-07-25 11:44:35.220546] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:36.372 [2024-07-25 11:44:35.220565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.467 ms 00:19:36.372 [2024-07-25 11:44:35.220580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.372 [2024-07-25 11:44:35.220688] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:36.372 [2024-07-25 11:44:35.220726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:36.372 [2024-07-25 11:44:35.220743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:36.372 [2024-07-25 11:44:35.220759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:36.372 [2024-07-25 11:44:35.220773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:36.372 [2024-07-25 11:44:35.220788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:36.372 [2024-07-25 11:44:35.220801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:36.372 [2024-07-25 11:44:35.220816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:36.372 [2024-07-25 11:44:35.220829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:36.372 [2024-07-25 11:44:35.220847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:36.372 [2024-07-25 11:44:35.220860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:36.372 [2024-07-25 11:44:35.220875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:36.372 [2024-07-25 11:44:35.220888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:36.372 [2024-07-25 11:44:35.220903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:36.372 [2024-07-25 11:44:35.220916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:36.372 [2024-07-25 11:44:35.220954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:36.372 [2024-07-25 11:44:35.220967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:36.372 [2024-07-25 11:44:35.220982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.220996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.221012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.221033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.221061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.221075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 
[2024-07-25 11:44:35.221094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.221106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.221124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.221136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.221151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.221163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.221178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.221191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.221205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.221218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.221232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.221245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.221265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.221298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.221327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.221346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.221362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.221375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.221393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.221419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.221434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.221446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.221462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.221474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.221489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:19:36.373 [2024-07-25 11:44:35.221502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.221520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.221542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.221570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.221585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.221601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.221614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.221628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.221641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.221659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.221671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.221686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.221698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.221713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.221726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.221740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.221753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.221768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.221787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.221814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.221843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.221861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.221874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.221889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.221902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.221933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.221948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.221965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.221977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.221993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.222005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.222020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.222039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.222065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.222082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.222097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.222110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.222125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.222137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.222152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.222164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.222182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.222194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.222209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.222243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.222260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.222279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:36.373 [2024-07-25 11:44:35.222306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:36.374 [2024-07-25 11:44:35.222329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:36.374 [2024-07-25 11:44:35.222347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:36.374 [2024-07-25 11:44:35.222359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:36.374 [2024-07-25 11:44:35.222374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:36.374 [2024-07-25 11:44:35.222392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:36.374 [2024-07-25 11:44:35.222421] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:36.374 [2024-07-25 11:44:35.222434] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0d61488d-6e24-4888-9a88-40bdfc647701 00:19:36.374 [2024-07-25 11:44:35.222450] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:36.374 [2024-07-25 11:44:35.222465] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:36.374 [2024-07-25 11:44:35.222483] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:36.374 [2024-07-25 11:44:35.222495] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:36.374 [2024-07-25 11:44:35.222512] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:36.374 [2024-07-25 11:44:35.222532] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:36.374 [2024-07-25 11:44:35.222557] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:36.374 [2024-07-25 11:44:35.222570] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:36.374 [2024-07-25 11:44:35.222583] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:36.374 [2024-07-25 11:44:35.222596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.374 [2024-07-25 11:44:35.222611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:36.374 [2024-07-25 11:44:35.222625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.912 ms 00:19:36.374 [2024-07-25 11:44:35.222639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.374 [2024-07-25 11:44:35.239973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.374 [2024-07-25 11:44:35.240039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:36.374 [2024-07-25 11:44:35.240058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.223 ms 00:19:36.374 [2024-07-25 11:44:35.240074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.374 [2024-07-25 11:44:35.240620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.374 [2024-07-25 11:44:35.240660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:36.374 [2024-07-25 11:44:35.240677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.498 ms 00:19:36.374 [2024-07-25 11:44:35.240692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.374 [2024-07-25 11:44:35.300885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.374 [2024-07-25 11:44:35.300980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:36.374 [2024-07-25 11:44:35.301009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.374 [2024-07-25 11:44:35.301026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
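Each management step in the shutdown sequence above is emitted as a fixed group of trace_step records (Action or Rollback, then name, duration, status), and the stats block's WAF (write amplification) is effectively total writes over user writes, so 960 internal writes against zero user writes prints as inf. A minimal sketch for pulling the per-step timings out of a saved console log, assuming a hypothetical build.log with one record per line as the driver prints them:

paste \
  <(sed -n 's/.*trace_step.*name: //p' build.log) \
  <(sed -n 's/.*trace_step.*duration: //p' build.log)

paste pairs the Nth name with the Nth duration, one tab-separated line per step, which makes the slow steps in this run (the ~30 ms metadata persists versus the sub-millisecond "Stop core poller") easy to scan.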
00:19:36.374 [2024-07-25 11:44:35.301154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.374 [2024-07-25 11:44:35.301182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:36.374 [2024-07-25 11:44:35.301204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.374 [2024-07-25 11:44:35.301226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.374 [2024-07-25 11:44:35.301414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.374 [2024-07-25 11:44:35.301442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:36.374 [2024-07-25 11:44:35.301456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.374 [2024-07-25 11:44:35.301478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.374 [2024-07-25 11:44:35.301513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.374 [2024-07-25 11:44:35.301534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:36.374 [2024-07-25 11:44:35.301547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.374 [2024-07-25 11:44:35.301562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.374 [2024-07-25 11:44:35.415300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.374 [2024-07-25 11:44:35.415393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:36.374 [2024-07-25 11:44:35.415414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.374 [2024-07-25 11:44:35.415430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.632 [2024-07-25 11:44:35.503496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.632 [2024-07-25 11:44:35.503594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:36.632 [2024-07-25 11:44:35.503615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.632 [2024-07-25 11:44:35.503632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.632 [2024-07-25 11:44:35.503816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.632 [2024-07-25 11:44:35.503857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:36.632 [2024-07-25 11:44:35.503879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.632 [2024-07-25 11:44:35.503903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.632 [2024-07-25 11:44:35.504032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.632 [2024-07-25 11:44:35.504061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:36.632 [2024-07-25 11:44:35.504076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.632 [2024-07-25 11:44:35.504091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.632 [2024-07-25 11:44:35.504242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.632 [2024-07-25 11:44:35.504297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:36.632 [2024-07-25 11:44:35.504319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.632 [2024-07-25 
11:44:35.504339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.632 [2024-07-25 11:44:35.504432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.632 [2024-07-25 11:44:35.504461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:36.632 [2024-07-25 11:44:35.504475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.632 [2024-07-25 11:44:35.504489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.632 [2024-07-25 11:44:35.504558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.632 [2024-07-25 11:44:35.504577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:36.632 [2024-07-25 11:44:35.504594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.632 [2024-07-25 11:44:35.504608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.632 [2024-07-25 11:44:35.504697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.632 [2024-07-25 11:44:35.504726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:36.632 [2024-07-25 11:44:35.504740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.632 [2024-07-25 11:44:35.504754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.633 [2024-07-25 11:44:35.505006] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 479.133 ms, result 0 00:19:36.633 true 00:19:36.633 11:44:35 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 78442 00:19:36.633 11:44:35 ftl.ftl_fio_basic -- common/autotest_common.sh@950 -- # '[' -z 78442 ']' 00:19:36.633 11:44:35 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # kill -0 78442 00:19:36.633 11:44:35 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # uname 00:19:36.633 11:44:35 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:36.633 11:44:35 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78442 00:19:36.633 killing process with pid 78442 00:19:36.633 11:44:35 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:36.633 11:44:35 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:36.633 11:44:35 ftl.ftl_fio_basic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78442' 00:19:36.633 11:44:35 ftl.ftl_fio_basic -- common/autotest_common.sh@969 -- # kill 78442 00:19:36.633 11:44:35 ftl.ftl_fio_basic -- common/autotest_common.sh@974 -- # wait 78442 00:19:41.897 11:44:40 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:19:41.898 11:44:40 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:19:41.898 11:44:40 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:19:41.898 11:44:40 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:41.898 11:44:40 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:41.898 11:44:40 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:19:41.898 11:44:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:19:41.898 11:44:40 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:41.898 11:44:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:41.898 11:44:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:19:41.898 11:44:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:41.898 11:44:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:19:41.898 11:44:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:19:41.898 11:44:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:41.898 11:44:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:41.898 11:44:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:19:41.898 11:44:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:42.156 11:44:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:42.156 11:44:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:42.156 11:44:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:19:42.156 11:44:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:42.156 11:44:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:19:42.157 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:19:42.157 fio-3.35 00:19:42.157 Starting 1 thread 00:19:48.718 00:19:48.718 test: (groupid=0, jobs=1): err= 0: pid=78663: Thu Jul 25 11:44:46 2024 00:19:48.718 read: IOPS=938, BW=62.3MiB/s (65.3MB/s)(255MiB/4086msec) 00:19:48.718 slat (usec): min=5, max=128, avg= 7.51, stdev= 4.04 00:19:48.718 clat (usec): min=322, max=853, avg=472.83, stdev=51.06 00:19:48.718 lat (usec): min=331, max=859, avg=480.33, stdev=51.77 00:19:48.718 clat percentiles (usec): 00:19:48.718 | 1.00th=[ 375], 5.00th=[ 388], 10.00th=[ 420], 20.00th=[ 445], 00:19:48.718 | 30.00th=[ 449], 40.00th=[ 453], 50.00th=[ 461], 60.00th=[ 469], 00:19:48.718 | 70.00th=[ 490], 80.00th=[ 519], 90.00th=[ 545], 95.00th=[ 562], 00:19:48.718 | 99.00th=[ 611], 99.50th=[ 627], 99.90th=[ 701], 99.95th=[ 832], 00:19:48.718 | 99.99th=[ 857] 00:19:48.718 write: IOPS=944, BW=62.7MiB/s (65.8MB/s)(256MiB/4081msec); 0 zone resets 00:19:48.718 slat (nsec): min=20120, max=98826, avg=24074.85, stdev=4772.89 00:19:48.718 clat (usec): min=367, max=927, avg=544.29, stdev=63.08 00:19:48.718 lat (usec): min=389, max=954, avg=568.37, stdev=63.47 00:19:48.718 clat percentiles (usec): 00:19:48.718 | 1.00th=[ 424], 5.00th=[ 465], 10.00th=[ 474], 20.00th=[ 486], 00:19:48.718 | 30.00th=[ 506], 40.00th=[ 537], 50.00th=[ 545], 60.00th=[ 553], 00:19:48.718 | 70.00th=[ 562], 80.00th=[ 586], 90.00th=[ 619], 95.00th=[ 644], 00:19:48.718 | 99.00th=[ 775], 99.50th=[ 824], 99.90th=[ 881], 99.95th=[ 922], 00:19:48.718 | 99.99th=[ 930] 00:19:48.718 bw ( KiB/s): min=63376, max=65824, per=100.00%, avg=64294.00, stdev=918.81, samples=8 00:19:48.718 iops : min= 932, max= 968, avg=945.50, stdev=13.51, samples=8 00:19:48.718 lat (usec) : 500=50.79%, 750=48.50%, 1000=0.72% 00:19:48.718 cpu : usr=98.56%, 
sys=0.44%, ctx=9, majf=0, minf=1171 00:19:48.718 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:48.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:48.718 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:48.718 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:48.718 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:48.718 00:19:48.718 Run status group 0 (all jobs): 00:19:48.718 READ: bw=62.3MiB/s (65.3MB/s), 62.3MiB/s-62.3MiB/s (65.3MB/s-65.3MB/s), io=255MiB (267MB), run=4086-4086msec 00:19:48.718 WRITE: bw=62.7MiB/s (65.8MB/s), 62.7MiB/s-62.7MiB/s (65.8MB/s-65.8MB/s), io=256MiB (269MB), run=4081-4081msec 00:19:49.655 ----------------------------------------------------- 00:19:49.655 Suppressions used: 00:19:49.655 count bytes template 00:19:49.655 1 5 /usr/src/fio/parse.c 00:19:49.655 1 8 libtcmalloc_minimal.so 00:19:49.655 1 904 libcrypto.so 00:19:49.655 ----------------------------------------------------- 00:19:49.655 00:19:49.655 11:44:48 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:19:49.655 11:44:48 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:49.655 11:44:48 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:49.655 11:44:48 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:19:49.655 11:44:48 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:19:49.655 11:44:48 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:49.655 11:44:48 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:49.655 11:44:48 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:19:49.655 11:44:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:19:49.655 11:44:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:49.655 11:44:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:49.655 11:44:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:19:49.655 11:44:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:49.655 11:44:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:19:49.655 11:44:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:19:49.655 11:44:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:49.655 11:44:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:49.655 11:44:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:19:49.655 11:44:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:49.655 11:44:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:49.655 11:44:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:49.655 11:44:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:19:49.655 11:44:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:49.655 11:44:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:19:49.914 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:19:49.914 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:19:49.914 fio-3.35 00:19:49.914 Starting 2 threads 00:20:21.979 00:20:21.979 first_half: (groupid=0, jobs=1): err= 0: pid=78772: Thu Jul 25 11:45:18 2024 00:20:21.979 read: IOPS=2348, BW=9395KiB/s (9621kB/s)(256MiB/27876msec) 00:20:21.979 slat (nsec): min=4447, max=35495, avg=7622.32, stdev=1743.64 00:20:21.979 clat (usec): min=723, max=314352, avg=45970.06, stdev=28742.08 00:20:21.979 lat (usec): min=728, max=314361, avg=45977.68, stdev=28742.29 00:20:21.979 clat percentiles (msec): 00:20:21.979 | 1.00th=[ 11], 5.00th=[ 38], 10.00th=[ 39], 20.00th=[ 39], 00:20:21.979 | 30.00th=[ 39], 40.00th=[ 39], 50.00th=[ 39], 60.00th=[ 40], 00:20:21.979 | 70.00th=[ 40], 80.00th=[ 46], 90.00th=[ 48], 95.00th=[ 89], 00:20:21.979 | 99.00th=[ 194], 99.50th=[ 213], 99.90th=[ 249], 99.95th=[ 279], 00:20:21.980 | 99.99th=[ 305] 00:20:21.980 write: IOPS=2354, BW=9418KiB/s (9644kB/s)(256MiB/27834msec); 0 zone resets 00:20:21.980 slat (usec): min=5, max=192, avg= 8.75, stdev= 4.32 00:20:21.980 clat (usec): min=452, max=53601, avg=8480.36, stdev=8376.33 00:20:21.980 lat (usec): min=458, max=53611, avg=8489.12, stdev=8376.47 00:20:21.980 clat percentiles (usec): 00:20:21.980 | 1.00th=[ 1188], 5.00th=[ 1549], 10.00th=[ 1876], 20.00th=[ 3621], 00:20:21.980 | 30.00th=[ 4752], 40.00th=[ 5866], 50.00th=[ 6521], 60.00th=[ 7373], 00:20:21.980 | 70.00th=[ 8160], 80.00th=[ 9765], 90.00th=[15664], 95.00th=[23725], 00:20:21.980 | 99.00th=[46924], 99.50th=[49546], 99.90th=[51643], 99.95th=[52167], 00:20:21.980 | 99.99th=[52691] 00:20:21.980 bw ( KiB/s): min= 40, max=41544, per=100.00%, avg=22636.87, stdev=12926.77, samples=23 00:20:21.980 iops : min= 10, max=10386, avg=5659.22, stdev=3231.69, samples=23 00:20:21.980 lat (usec) : 500=0.01%, 750=0.04%, 1000=0.14% 00:20:21.980 lat (msec) : 2=5.57%, 4=5.82%, 10=28.83%, 20=8.38%, 50=46.51% 00:20:21.980 lat (msec) : 100=2.44%, 250=2.22%, 500=0.05% 00:20:21.980 cpu : usr=99.20%, sys=0.14%, ctx=40, majf=0, minf=5532 00:20:21.980 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:20:21.980 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.980 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:21.980 issued rwts: total=65475,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:21.980 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:21.980 second_half: (groupid=0, jobs=1): err= 0: pid=78773: Thu Jul 25 11:45:18 2024 00:20:21.980 read: IOPS=2368, BW=9475KiB/s (9703kB/s)(256MiB/27646msec) 00:20:21.980 slat (nsec): min=4546, max=41006, avg=7641.22, stdev=1705.26 00:20:21.980 clat (msec): min=11, max=306, avg=46.66, stdev=26.78 00:20:21.980 lat (msec): min=11, max=306, avg=46.67, stdev=26.78 00:20:21.980 clat percentiles (msec): 00:20:21.980 | 1.00th=[ 36], 5.00th=[ 38], 10.00th=[ 39], 20.00th=[ 39], 00:20:21.980 | 30.00th=[ 39], 40.00th=[ 39], 50.00th=[ 39], 60.00th=[ 40], 00:20:21.980 | 70.00th=[ 41], 80.00th=[ 47], 90.00th=[ 53], 95.00th=[ 83], 00:20:21.980 | 99.00th=[ 192], 99.50th=[ 211], 99.90th=[ 251], 99.95th=[ 
275], 00:20:21.980 | 99.99th=[ 300] 00:20:21.980 write: IOPS=2383, BW=9535KiB/s (9763kB/s)(256MiB/27494msec); 0 zone resets 00:20:21.980 slat (usec): min=5, max=259, avg= 8.57, stdev= 4.85 00:20:21.980 clat (usec): min=522, max=43838, avg=7346.76, stdev=4547.59 00:20:21.980 lat (usec): min=533, max=43845, avg=7355.33, stdev=4547.80 00:20:21.980 clat percentiles (usec): 00:20:21.980 | 1.00th=[ 1319], 5.00th=[ 2114], 10.00th=[ 2999], 20.00th=[ 3982], 00:20:21.980 | 30.00th=[ 5014], 40.00th=[ 5800], 50.00th=[ 6456], 60.00th=[ 7242], 00:20:21.980 | 70.00th=[ 7767], 80.00th=[ 9110], 90.00th=[13829], 95.00th=[16057], 00:20:21.980 | 99.00th=[22152], 99.50th=[31327], 99.90th=[39060], 99.95th=[41681], 00:20:21.980 | 99.99th=[42730] 00:20:21.980 bw ( KiB/s): min= 1952, max=45672, per=100.00%, avg=23831.27, stdev=13664.55, samples=22 00:20:21.980 iops : min= 488, max=11418, avg=5957.82, stdev=3416.14, samples=22 00:20:21.980 lat (usec) : 750=0.05%, 1000=0.14% 00:20:21.980 lat (msec) : 2=1.99%, 4=7.93%, 10=31.13%, 20=8.18%, 50=45.15% 00:20:21.980 lat (msec) : 100=3.29%, 250=2.11%, 500=0.05% 00:20:21.980 cpu : usr=99.24%, sys=0.15%, ctx=59, majf=0, minf=5583 00:20:21.980 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:20:21.980 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.980 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:21.980 issued rwts: total=65489,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:21.980 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:21.980 00:20:21.980 Run status group 0 (all jobs): 00:20:21.980 READ: bw=18.4MiB/s (19.2MB/s), 9395KiB/s-9475KiB/s (9621kB/s-9703kB/s), io=512MiB (536MB), run=27646-27876msec 00:20:21.980 WRITE: bw=18.4MiB/s (19.3MB/s), 9418KiB/s-9535KiB/s (9644kB/s-9763kB/s), io=512MiB (537MB), run=27494-27834msec 00:20:21.980 ----------------------------------------------------- 00:20:21.980 Suppressions used: 00:20:21.980 count bytes template 00:20:21.980 2 10 /usr/src/fio/parse.c 00:20:21.980 2 192 /usr/src/fio/iolog.c 00:20:21.980 1 8 libtcmalloc_minimal.so 00:20:21.980 1 904 libcrypto.so 00:20:21.980 ----------------------------------------------------- 00:20:21.980 00:20:21.980 11:45:20 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:20:21.980 11:45:20 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:21.980 11:45:20 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:21.980 11:45:20 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:20:21.980 11:45:20 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:20:21.980 11:45:20 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:21.980 11:45:20 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:21.980 11:45:20 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:20:21.980 11:45:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:20:21.980 11:45:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:21.980 11:45:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:21.980 11:45:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 
00:20:21.980 11:45:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:21.980 11:45:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:20:21.980 11:45:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:21.980 11:45:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:21.980 11:45:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:21.980 11:45:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:20:21.980 11:45:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:21.980 11:45:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:21.980 11:45:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:21.980 11:45:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:20:21.980 11:45:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:21.980 11:45:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:20:21.980 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:20:21.980 fio-3.35 00:20:21.980 Starting 1 thread 00:20:40.097 00:20:40.097 test: (groupid=0, jobs=1): err= 0: pid=79120: Thu Jul 25 11:45:38 2024 00:20:40.097 read: IOPS=6426, BW=25.1MiB/s (26.3MB/s)(255MiB/10146msec) 00:20:40.097 slat (nsec): min=4724, max=39369, avg=7192.53, stdev=1983.33 00:20:40.097 clat (usec): min=842, max=43899, avg=19906.05, stdev=1237.83 00:20:40.097 lat (usec): min=847, max=43907, avg=19913.24, stdev=1237.86 00:20:40.097 clat percentiles (usec): 00:20:40.097 | 1.00th=[18744], 5.00th=[19006], 10.00th=[19268], 20.00th=[19268], 00:20:40.097 | 30.00th=[19530], 40.00th=[19530], 50.00th=[19530], 60.00th=[19792], 00:20:40.097 | 70.00th=[19792], 80.00th=[20055], 90.00th=[21365], 95.00th=[22414], 00:20:40.097 | 99.00th=[23462], 99.50th=[23725], 99.90th=[32900], 99.95th=[38536], 00:20:40.097 | 99.99th=[42730] 00:20:40.097 write: IOPS=11.4k, BW=44.4MiB/s (46.6MB/s)(256MiB/5762msec); 0 zone resets 00:20:40.097 slat (usec): min=6, max=706, avg= 9.86, stdev= 5.56 00:20:40.097 clat (usec): min=602, max=63596, avg=11187.63, stdev=14257.13 00:20:40.097 lat (usec): min=613, max=63605, avg=11197.49, stdev=14257.18 00:20:40.097 clat percentiles (usec): 00:20:40.097 | 1.00th=[ 1012], 5.00th=[ 1221], 10.00th=[ 1352], 20.00th=[ 1565], 00:20:40.097 | 30.00th=[ 1778], 40.00th=[ 2311], 50.00th=[ 7308], 60.00th=[ 8291], 00:20:40.097 | 70.00th=[ 9503], 80.00th=[10945], 90.00th=[40109], 95.00th=[44827], 00:20:40.097 | 99.00th=[50070], 99.50th=[55313], 99.90th=[59507], 99.95th=[60556], 00:20:40.097 | 99.99th=[61604] 00:20:40.097 bw ( KiB/s): min=20832, max=63328, per=96.03%, avg=43690.67, stdev=12055.56, samples=12 00:20:40.097 iops : min= 5208, max=15832, avg=10922.67, stdev=3013.89, samples=12 00:20:40.097 lat (usec) : 750=0.01%, 1000=0.43% 00:20:40.097 lat (msec) : 2=17.86%, 4=2.61%, 10=16.16%, 20=44.06%, 50=18.37% 00:20:40.097 lat (msec) : 100=0.51% 00:20:40.097 cpu : usr=98.84%, sys=0.30%, ctx=30, majf=0, minf=5567 00:20:40.097 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 
32=0.1%, >=64=99.8% 00:20:40.097 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:40.097 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:40.097 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:40.097 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:40.097 00:20:40.097 Run status group 0 (all jobs): 00:20:40.097 READ: bw=25.1MiB/s (26.3MB/s), 25.1MiB/s-25.1MiB/s (26.3MB/s-26.3MB/s), io=255MiB (267MB), run=10146-10146msec 00:20:40.097 WRITE: bw=44.4MiB/s (46.6MB/s), 44.4MiB/s-44.4MiB/s (46.6MB/s-46.6MB/s), io=256MiB (268MB), run=5762-5762msec 00:20:41.031 ----------------------------------------------------- 00:20:41.031 Suppressions used: 00:20:41.031 count bytes template 00:20:41.031 1 5 /usr/src/fio/parse.c 00:20:41.031 2 192 /usr/src/fio/iolog.c 00:20:41.031 1 8 libtcmalloc_minimal.so 00:20:41.031 1 904 libcrypto.so 00:20:41.031 ----------------------------------------------------- 00:20:41.031 00:20:41.031 11:45:39 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:20:41.031 11:45:39 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:41.031 11:45:39 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:41.031 11:45:39 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:41.031 11:45:39 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:20:41.031 11:45:39 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:20:41.031 Remove shared memory files 00:20:41.031 11:45:39 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:20:41.031 11:45:39 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:20:41.031 11:45:39 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid62321 /dev/shm/spdk_tgt_trace.pid77376 00:20:41.031 11:45:39 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:20:41.031 11:45:39 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:20:41.032 00:20:41.032 real 1m14.444s 00:20:41.032 user 2m42.271s 00:20:41.032 sys 0m4.193s 00:20:41.032 11:45:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:41.032 11:45:39 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:41.032 ************************************ 00:20:41.032 END TEST ftl_fio_basic 00:20:41.032 ************************************ 00:20:41.032 11:45:39 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:20:41.032 11:45:39 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:20:41.032 11:45:39 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:41.032 11:45:39 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:41.032 ************************************ 00:20:41.032 START TEST ftl_bdevperf 00:20:41.032 ************************************ 00:20:41.032 11:45:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:20:41.032 * Looking for test storage... 
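All three fio jobs in the ftl_fio_basic run above went through the same fio_plugin helper whose xtrace precedes each of them: it ldd's the SPDK bdev plugin, greps its dependencies for an ASan runtime, and LD_PRELOADs that runtime ahead of the plugin so the sanitizer is initialized before fio dlopen()s the ioengine. A standalone sketch of the same pattern, using the paths shown in the traces above:

# the external ioengine and one of the job files exercised above
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
job=/home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio

# find the ASan runtime the plugin was linked against, if any
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')

# preload the sanitizer runtime first, then the plugin itself
LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" /usr/src/fio/fio "$job"

Without the preload, an ASan-instrumented shared object loaded into an uninstrumented fio would typically abort at startup because the sanitizer runtime does not come first in the library list, which is why the harness repeats this dance before every job.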
00:20:41.032 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:41.032 11:45:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:41.032 11:45:40 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:20:41.032 11:45:40 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:41.032 11:45:40 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:41.032 11:45:40 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:20:41.032 11:45:40 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:41.032 11:45:40 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:41.032 11:45:40 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:41.032 11:45:40 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:41.032 11:45:40 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:41.032 11:45:40 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:41.032 11:45:40 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:41.032 11:45:40 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:41.032 11:45:40 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:41.032 11:45:40 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:41.032 11:45:40 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:41.032 11:45:40 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:41.032 11:45:40 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:41.032 11:45:40 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:41.032 11:45:40 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:41.032 11:45:40 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:41.032 11:45:40 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:41.032 11:45:40 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:41.032 11:45:40 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:41.032 11:45:40 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:41.032 11:45:40 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:41.032 11:45:40 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:41.032 11:45:40 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:41.032 11:45:40 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:41.032 11:45:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:20:41.032 11:45:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:20:41.032 11:45:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:20:41.032 11:45:40 ftl.ftl_bdevperf 
-- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:41.032 11:45:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:20:41.032 11:45:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # timing_enter '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:20:41.032 11:45:40 ftl.ftl_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:41.032 11:45:40 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:41.032 11:45:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@19 -- # bdevperf_pid=79374 00:20:41.032 11:45:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:20:41.032 11:45:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:20:41.032 11:45:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # waitforlisten 79374 00:20:41.032 11:45:40 ftl.ftl_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 79374 ']' 00:20:41.032 11:45:40 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:41.032 11:45:40 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:41.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:41.032 11:45:40 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:41.032 11:45:40 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:41.032 11:45:40 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:41.290 [2024-07-25 11:45:40.161334] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:20:41.290 [2024-07-25 11:45:40.161510] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79374 ] 00:20:41.290 [2024-07-25 11:45:40.332692] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:41.547 [2024-07-25 11:45:40.569279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:42.114 11:45:41 ftl.ftl_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:42.114 11:45:41 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:20:42.114 11:45:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:42.114 11:45:41 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:20:42.114 11:45:41 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:42.114 11:45:41 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:20:42.114 11:45:41 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:20:42.114 11:45:41 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:42.682 11:45:41 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:42.682 11:45:41 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:20:42.682 11:45:41 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:42.682 11:45:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:20:42.682 11:45:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:20:42.682 11:45:41 
ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:20:42.682 11:45:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:20:42.682 11:45:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:42.682 11:45:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:20:42.682 { 00:20:42.682 "name": "nvme0n1", 00:20:42.682 "aliases": [ 00:20:42.682 "0865addf-d0b4-4182-bfc0-64ea02ae31a2" 00:20:42.682 ], 00:20:42.682 "product_name": "NVMe disk", 00:20:42.682 "block_size": 4096, 00:20:42.682 "num_blocks": 1310720, 00:20:42.682 "uuid": "0865addf-d0b4-4182-bfc0-64ea02ae31a2", 00:20:42.682 "assigned_rate_limits": { 00:20:42.682 "rw_ios_per_sec": 0, 00:20:42.682 "rw_mbytes_per_sec": 0, 00:20:42.682 "r_mbytes_per_sec": 0, 00:20:42.682 "w_mbytes_per_sec": 0 00:20:42.682 }, 00:20:42.682 "claimed": true, 00:20:42.682 "claim_type": "read_many_write_one", 00:20:42.682 "zoned": false, 00:20:42.682 "supported_io_types": { 00:20:42.682 "read": true, 00:20:42.682 "write": true, 00:20:42.682 "unmap": true, 00:20:42.682 "flush": true, 00:20:42.682 "reset": true, 00:20:42.682 "nvme_admin": true, 00:20:42.682 "nvme_io": true, 00:20:42.682 "nvme_io_md": false, 00:20:42.682 "write_zeroes": true, 00:20:42.682 "zcopy": false, 00:20:42.682 "get_zone_info": false, 00:20:42.682 "zone_management": false, 00:20:42.682 "zone_append": false, 00:20:42.683 "compare": true, 00:20:42.683 "compare_and_write": false, 00:20:42.683 "abort": true, 00:20:42.683 "seek_hole": false, 00:20:42.683 "seek_data": false, 00:20:42.683 "copy": true, 00:20:42.683 "nvme_iov_md": false 00:20:42.683 }, 00:20:42.683 "driver_specific": { 00:20:42.683 "nvme": [ 00:20:42.683 { 00:20:42.683 "pci_address": "0000:00:11.0", 00:20:42.683 "trid": { 00:20:42.683 "trtype": "PCIe", 00:20:42.683 "traddr": "0000:00:11.0" 00:20:42.683 }, 00:20:42.683 "ctrlr_data": { 00:20:42.683 "cntlid": 0, 00:20:42.683 "vendor_id": "0x1b36", 00:20:42.683 "model_number": "QEMU NVMe Ctrl", 00:20:42.683 "serial_number": "12341", 00:20:42.683 "firmware_revision": "8.0.0", 00:20:42.683 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:42.683 "oacs": { 00:20:42.683 "security": 0, 00:20:42.683 "format": 1, 00:20:42.683 "firmware": 0, 00:20:42.683 "ns_manage": 1 00:20:42.683 }, 00:20:42.683 "multi_ctrlr": false, 00:20:42.683 "ana_reporting": false 00:20:42.683 }, 00:20:42.683 "vs": { 00:20:42.683 "nvme_version": "1.4" 00:20:42.683 }, 00:20:42.683 "ns_data": { 00:20:42.683 "id": 1, 00:20:42.683 "can_share": false 00:20:42.683 } 00:20:42.683 } 00:20:42.683 ], 00:20:42.683 "mp_policy": "active_passive" 00:20:42.683 } 00:20:42.683 } 00:20:42.683 ]' 00:20:42.683 11:45:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:20:42.941 11:45:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:20:42.941 11:45:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:20:42.941 11:45:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=1310720 00:20:42.941 11:45:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:20:42.941 11:45:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 5120 00:20:42.941 11:45:41 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:20:42.941 11:45:41 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:42.941 11:45:41 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:20:42.941 11:45:41 ftl.ftl_bdevperf 
-- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:42.941 11:45:41 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:43.200 11:45:42 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=3316d9d8-f4c5-4678-926a-0d160f3d2b3a 00:20:43.200 11:45:42 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:20:43.200 11:45:42 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3316d9d8-f4c5-4678-926a-0d160f3d2b3a 00:20:43.459 11:45:42 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:20:43.719 11:45:42 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=bf4b22ee-af72-42cc-8162-5f3c5619ec41 00:20:43.719 11:45:42 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u bf4b22ee-af72-42cc-8162-5f3c5619ec41 00:20:43.977 11:45:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # split_bdev=4ba993bb-3fc5-4c9c-b3e9-5a821ac937de 00:20:43.978 11:45:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@24 -- # create_nv_cache_bdev nvc0 0000:00:10.0 4ba993bb-3fc5-4c9c-b3e9-5a821ac937de 00:20:43.978 11:45:42 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:20:43.978 11:45:42 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:43.978 11:45:42 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=4ba993bb-3fc5-4c9c-b3e9-5a821ac937de 00:20:43.978 11:45:42 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:20:43.978 11:45:42 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 4ba993bb-3fc5-4c9c-b3e9-5a821ac937de 00:20:43.978 11:45:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=4ba993bb-3fc5-4c9c-b3e9-5a821ac937de 00:20:43.978 11:45:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:20:43.978 11:45:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:20:43.978 11:45:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:20:43.978 11:45:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4ba993bb-3fc5-4c9c-b3e9-5a821ac937de 00:20:44.236 11:45:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:20:44.236 { 00:20:44.236 "name": "4ba993bb-3fc5-4c9c-b3e9-5a821ac937de", 00:20:44.236 "aliases": [ 00:20:44.236 "lvs/nvme0n1p0" 00:20:44.236 ], 00:20:44.236 "product_name": "Logical Volume", 00:20:44.236 "block_size": 4096, 00:20:44.236 "num_blocks": 26476544, 00:20:44.236 "uuid": "4ba993bb-3fc5-4c9c-b3e9-5a821ac937de", 00:20:44.236 "assigned_rate_limits": { 00:20:44.236 "rw_ios_per_sec": 0, 00:20:44.236 "rw_mbytes_per_sec": 0, 00:20:44.236 "r_mbytes_per_sec": 0, 00:20:44.236 "w_mbytes_per_sec": 0 00:20:44.236 }, 00:20:44.236 "claimed": false, 00:20:44.236 "zoned": false, 00:20:44.236 "supported_io_types": { 00:20:44.236 "read": true, 00:20:44.236 "write": true, 00:20:44.236 "unmap": true, 00:20:44.236 "flush": false, 00:20:44.236 "reset": true, 00:20:44.236 "nvme_admin": false, 00:20:44.236 "nvme_io": false, 00:20:44.236 "nvme_io_md": false, 00:20:44.236 "write_zeroes": true, 00:20:44.236 "zcopy": false, 00:20:44.236 "get_zone_info": false, 00:20:44.236 "zone_management": false, 00:20:44.236 "zone_append": false, 00:20:44.236 "compare": false, 00:20:44.236 "compare_and_write": false, 00:20:44.236 "abort": false, 00:20:44.236 "seek_hole": true, 
00:20:44.236 "seek_data": true, 00:20:44.236 "copy": false, 00:20:44.236 "nvme_iov_md": false 00:20:44.236 }, 00:20:44.236 "driver_specific": { 00:20:44.236 "lvol": { 00:20:44.236 "lvol_store_uuid": "bf4b22ee-af72-42cc-8162-5f3c5619ec41", 00:20:44.236 "base_bdev": "nvme0n1", 00:20:44.236 "thin_provision": true, 00:20:44.236 "num_allocated_clusters": 0, 00:20:44.236 "snapshot": false, 00:20:44.236 "clone": false, 00:20:44.236 "esnap_clone": false 00:20:44.236 } 00:20:44.236 } 00:20:44.236 } 00:20:44.236 ]' 00:20:44.236 11:45:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:20:44.236 11:45:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:20:44.236 11:45:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:20:44.236 11:45:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:20:44.236 11:45:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:20:44.236 11:45:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:20:44.236 11:45:43 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:20:44.236 11:45:43 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:20:44.236 11:45:43 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:44.506 11:45:43 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:44.506 11:45:43 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:44.506 11:45:43 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 4ba993bb-3fc5-4c9c-b3e9-5a821ac937de 00:20:44.506 11:45:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=4ba993bb-3fc5-4c9c-b3e9-5a821ac937de 00:20:44.506 11:45:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:20:44.506 11:45:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:20:44.506 11:45:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:20:44.506 11:45:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4ba993bb-3fc5-4c9c-b3e9-5a821ac937de 00:20:44.768 11:45:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:20:44.768 { 00:20:44.768 "name": "4ba993bb-3fc5-4c9c-b3e9-5a821ac937de", 00:20:44.768 "aliases": [ 00:20:44.768 "lvs/nvme0n1p0" 00:20:44.768 ], 00:20:44.768 "product_name": "Logical Volume", 00:20:44.768 "block_size": 4096, 00:20:44.768 "num_blocks": 26476544, 00:20:44.768 "uuid": "4ba993bb-3fc5-4c9c-b3e9-5a821ac937de", 00:20:44.768 "assigned_rate_limits": { 00:20:44.768 "rw_ios_per_sec": 0, 00:20:44.768 "rw_mbytes_per_sec": 0, 00:20:44.768 "r_mbytes_per_sec": 0, 00:20:44.768 "w_mbytes_per_sec": 0 00:20:44.768 }, 00:20:44.768 "claimed": false, 00:20:44.768 "zoned": false, 00:20:44.768 "supported_io_types": { 00:20:44.768 "read": true, 00:20:44.768 "write": true, 00:20:44.768 "unmap": true, 00:20:44.768 "flush": false, 00:20:44.768 "reset": true, 00:20:44.768 "nvme_admin": false, 00:20:44.768 "nvme_io": false, 00:20:44.768 "nvme_io_md": false, 00:20:44.768 "write_zeroes": true, 00:20:44.768 "zcopy": false, 00:20:44.768 "get_zone_info": false, 00:20:44.768 "zone_management": false, 00:20:44.768 "zone_append": false, 00:20:44.768 "compare": false, 00:20:44.768 "compare_and_write": false, 00:20:44.768 "abort": false, 00:20:44.768 "seek_hole": true, 00:20:44.768 "seek_data": true, 00:20:44.768 
"copy": false, 00:20:44.768 "nvme_iov_md": false 00:20:44.768 }, 00:20:44.768 "driver_specific": { 00:20:44.768 "lvol": { 00:20:44.768 "lvol_store_uuid": "bf4b22ee-af72-42cc-8162-5f3c5619ec41", 00:20:44.768 "base_bdev": "nvme0n1", 00:20:44.768 "thin_provision": true, 00:20:44.768 "num_allocated_clusters": 0, 00:20:44.768 "snapshot": false, 00:20:44.768 "clone": false, 00:20:44.768 "esnap_clone": false 00:20:44.768 } 00:20:44.768 } 00:20:44.768 } 00:20:44.768 ]' 00:20:44.768 11:45:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:20:44.768 11:45:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:20:44.768 11:45:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:20:45.027 11:45:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:20:45.027 11:45:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:20:45.027 11:45:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:20:45.027 11:45:43 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:20:45.027 11:45:43 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:45.284 11:45:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@24 -- # nv_cache=nvc0n1p0 00:20:45.284 11:45:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # get_bdev_size 4ba993bb-3fc5-4c9c-b3e9-5a821ac937de 00:20:45.285 11:45:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=4ba993bb-3fc5-4c9c-b3e9-5a821ac937de 00:20:45.285 11:45:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:20:45.285 11:45:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:20:45.285 11:45:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:20:45.285 11:45:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4ba993bb-3fc5-4c9c-b3e9-5a821ac937de 00:20:45.586 11:45:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:20:45.586 { 00:20:45.586 "name": "4ba993bb-3fc5-4c9c-b3e9-5a821ac937de", 00:20:45.586 "aliases": [ 00:20:45.587 "lvs/nvme0n1p0" 00:20:45.587 ], 00:20:45.587 "product_name": "Logical Volume", 00:20:45.587 "block_size": 4096, 00:20:45.587 "num_blocks": 26476544, 00:20:45.587 "uuid": "4ba993bb-3fc5-4c9c-b3e9-5a821ac937de", 00:20:45.587 "assigned_rate_limits": { 00:20:45.587 "rw_ios_per_sec": 0, 00:20:45.587 "rw_mbytes_per_sec": 0, 00:20:45.587 "r_mbytes_per_sec": 0, 00:20:45.587 "w_mbytes_per_sec": 0 00:20:45.587 }, 00:20:45.587 "claimed": false, 00:20:45.587 "zoned": false, 00:20:45.587 "supported_io_types": { 00:20:45.587 "read": true, 00:20:45.587 "write": true, 00:20:45.587 "unmap": true, 00:20:45.587 "flush": false, 00:20:45.587 "reset": true, 00:20:45.587 "nvme_admin": false, 00:20:45.587 "nvme_io": false, 00:20:45.587 "nvme_io_md": false, 00:20:45.587 "write_zeroes": true, 00:20:45.587 "zcopy": false, 00:20:45.587 "get_zone_info": false, 00:20:45.587 "zone_management": false, 00:20:45.587 "zone_append": false, 00:20:45.587 "compare": false, 00:20:45.587 "compare_and_write": false, 00:20:45.587 "abort": false, 00:20:45.587 "seek_hole": true, 00:20:45.587 "seek_data": true, 00:20:45.587 "copy": false, 00:20:45.587 "nvme_iov_md": false 00:20:45.587 }, 00:20:45.587 "driver_specific": { 00:20:45.587 "lvol": { 00:20:45.587 "lvol_store_uuid": "bf4b22ee-af72-42cc-8162-5f3c5619ec41", 00:20:45.587 "base_bdev": 
"nvme0n1", 00:20:45.587 "thin_provision": true, 00:20:45.587 "num_allocated_clusters": 0, 00:20:45.587 "snapshot": false, 00:20:45.587 "clone": false, 00:20:45.587 "esnap_clone": false 00:20:45.587 } 00:20:45.587 } 00:20:45.587 } 00:20:45.587 ]' 00:20:45.587 11:45:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:20:45.587 11:45:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:20:45.587 11:45:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:20:45.587 11:45:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:20:45.587 11:45:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:20:45.587 11:45:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:20:45.587 11:45:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # l2p_dram_size_mb=20 00:20:45.587 11:45:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 4ba993bb-3fc5-4c9c-b3e9-5a821ac937de -c nvc0n1p0 --l2p_dram_limit 20 00:20:45.900 [2024-07-25 11:45:44.713095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.900 [2024-07-25 11:45:44.713161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:45.900 [2024-07-25 11:45:44.713191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:45.900 [2024-07-25 11:45:44.713204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.900 [2024-07-25 11:45:44.713305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.900 [2024-07-25 11:45:44.713324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:45.900 [2024-07-25 11:45:44.713344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:20:45.900 [2024-07-25 11:45:44.713390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.900 [2024-07-25 11:45:44.713424] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:45.900 [2024-07-25 11:45:44.714517] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:45.900 [2024-07-25 11:45:44.714574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.900 [2024-07-25 11:45:44.714588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:45.900 [2024-07-25 11:45:44.714604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.156 ms 00:20:45.900 [2024-07-25 11:45:44.714616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.900 [2024-07-25 11:45:44.714789] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID f138daae-5243-489d-a6c4-0673c193aa89 00:20:45.900 [2024-07-25 11:45:44.716750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.900 [2024-07-25 11:45:44.716838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:45.900 [2024-07-25 11:45:44.716857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:20:45.900 [2024-07-25 11:45:44.716872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.900 [2024-07-25 11:45:44.727438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.900 [2024-07-25 11:45:44.727520] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:45.900 [2024-07-25 11:45:44.727538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.480 ms 00:20:45.900 [2024-07-25 11:45:44.727553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.900 [2024-07-25 11:45:44.727692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.900 [2024-07-25 11:45:44.727718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:45.900 [2024-07-25 11:45:44.727766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:20:45.900 [2024-07-25 11:45:44.727785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.900 [2024-07-25 11:45:44.727879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.900 [2024-07-25 11:45:44.727913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:45.900 [2024-07-25 11:45:44.727947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:20:45.900 [2024-07-25 11:45:44.727963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.900 [2024-07-25 11:45:44.727998] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:45.900 [2024-07-25 11:45:44.733285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.900 [2024-07-25 11:45:44.733342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:45.900 [2024-07-25 11:45:44.733379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.294 ms 00:20:45.900 [2024-07-25 11:45:44.733391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.900 [2024-07-25 11:45:44.733444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.900 [2024-07-25 11:45:44.733460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:45.900 [2024-07-25 11:45:44.733476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:45.900 [2024-07-25 11:45:44.733487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.900 [2024-07-25 11:45:44.733566] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:45.900 [2024-07-25 11:45:44.733754] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:45.900 [2024-07-25 11:45:44.733791] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:45.900 [2024-07-25 11:45:44.733809] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:20:45.900 [2024-07-25 11:45:44.733827] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:45.900 [2024-07-25 11:45:44.733841] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:45.900 [2024-07-25 11:45:44.733857] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:45.900 [2024-07-25 11:45:44.733870] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:45.900 [2024-07-25 11:45:44.733886] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:45.900 [2024-07-25 11:45:44.733897] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache chunk count 5 00:20:45.900 [2024-07-25 11:45:44.733912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.900 [2024-07-25 11:45:44.733938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:45.900 [2024-07-25 11:45:44.733959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.368 ms 00:20:45.900 [2024-07-25 11:45:44.733980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.900 [2024-07-25 11:45:44.734074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.900 [2024-07-25 11:45:44.734090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:45.900 [2024-07-25 11:45:44.734106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:20:45.900 [2024-07-25 11:45:44.734117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.900 [2024-07-25 11:45:44.734226] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:45.900 [2024-07-25 11:45:44.734252] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:45.900 [2024-07-25 11:45:44.734269] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:45.900 [2024-07-25 11:45:44.734285] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:45.900 [2024-07-25 11:45:44.734300] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:45.900 [2024-07-25 11:45:44.734311] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:45.900 [2024-07-25 11:45:44.734324] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:45.900 [2024-07-25 11:45:44.734336] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:45.900 [2024-07-25 11:45:44.734349] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:45.900 [2024-07-25 11:45:44.734359] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:45.900 [2024-07-25 11:45:44.734372] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:45.900 [2024-07-25 11:45:44.734383] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:45.900 [2024-07-25 11:45:44.734396] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:45.900 [2024-07-25 11:45:44.734406] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:45.900 [2024-07-25 11:45:44.734422] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:45.900 [2024-07-25 11:45:44.734433] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:45.900 [2024-07-25 11:45:44.734449] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:45.900 [2024-07-25 11:45:44.734460] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:45.900 [2024-07-25 11:45:44.734490] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:45.900 [2024-07-25 11:45:44.734504] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:45.900 [2024-07-25 11:45:44.734522] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:45.900 [2024-07-25 11:45:44.734533] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:45.900 [2024-07-25 11:45:44.734552] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:45.900 [2024-07-25 11:45:44.734563] ftl_layout.c: 
119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:45.900 [2024-07-25 11:45:44.734577] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:45.900 [2024-07-25 11:45:44.734588] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:45.900 [2024-07-25 11:45:44.734602] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:45.900 [2024-07-25 11:45:44.734613] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:45.900 [2024-07-25 11:45:44.734626] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:45.900 [2024-07-25 11:45:44.734637] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:45.900 [2024-07-25 11:45:44.734650] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:45.900 [2024-07-25 11:45:44.734661] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:45.900 [2024-07-25 11:45:44.734678] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:45.900 [2024-07-25 11:45:44.734689] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:45.900 [2024-07-25 11:45:44.734703] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:45.900 [2024-07-25 11:45:44.734719] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:45.900 [2024-07-25 11:45:44.734733] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:45.900 [2024-07-25 11:45:44.734744] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:45.900 [2024-07-25 11:45:44.734760] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:45.900 [2024-07-25 11:45:44.734771] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:45.901 [2024-07-25 11:45:44.734785] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:45.901 [2024-07-25 11:45:44.734796] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:45.901 [2024-07-25 11:45:44.734809] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:45.901 [2024-07-25 11:45:44.734820] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:45.901 [2024-07-25 11:45:44.734835] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:45.901 [2024-07-25 11:45:44.734847] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:45.901 [2024-07-25 11:45:44.734861] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:45.901 [2024-07-25 11:45:44.734874] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:45.901 [2024-07-25 11:45:44.734890] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:45.901 [2024-07-25 11:45:44.734901] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:45.901 [2024-07-25 11:45:44.734915] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:45.901 [2024-07-25 11:45:44.734950] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:45.901 [2024-07-25 11:45:44.734967] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:45.901 [2024-07-25 11:45:44.734985] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:45.901 [2024-07-25 11:45:44.735003] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:45.901 [2024-07-25 11:45:44.735017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:45.901 [2024-07-25 11:45:44.735032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:45.901 [2024-07-25 11:45:44.735043] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:45.901 [2024-07-25 11:45:44.735058] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:45.901 [2024-07-25 11:45:44.735070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:45.901 [2024-07-25 11:45:44.735084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:45.901 [2024-07-25 11:45:44.735096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:45.901 [2024-07-25 11:45:44.735111] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:45.901 [2024-07-25 11:45:44.735122] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:45.901 [2024-07-25 11:45:44.735141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:45.901 [2024-07-25 11:45:44.735154] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:45.901 [2024-07-25 11:45:44.735173] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:45.901 [2024-07-25 11:45:44.735185] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:45.901 [2024-07-25 11:45:44.735199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:45.901 [2024-07-25 11:45:44.735211] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:45.901 [2024-07-25 11:45:44.735227] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:45.901 [2024-07-25 11:45:44.735241] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:45.901 [2024-07-25 11:45:44.735256] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:45.901 [2024-07-25 11:45:44.735268] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:45.901 [2024-07-25 11:45:44.735282] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:45.901 [2024-07-25 11:45:44.735295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.901 [2024-07-25 11:45:44.735316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:45.901 [2024-07-25 11:45:44.735328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.142 ms 00:20:45.901 [2024-07-25 11:45:44.735343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.901 [2024-07-25 11:45:44.735396] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:20:45.901 [2024-07-25 11:45:44.735420] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:49.186 [2024-07-25 11:45:47.611226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.186 [2024-07-25 11:45:47.611314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:49.186 [2024-07-25 11:45:47.611344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2875.838 ms 00:20:49.186 [2024-07-25 11:45:47.611372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.186 [2024-07-25 11:45:47.658674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.186 [2024-07-25 11:45:47.658745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:49.186 [2024-07-25 11:45:47.658769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.951 ms 00:20:49.186 [2024-07-25 11:45:47.658786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.186 [2024-07-25 11:45:47.659022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.186 [2024-07-25 11:45:47.659052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:49.186 [2024-07-25 11:45:47.659068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:20:49.186 [2024-07-25 11:45:47.659087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.186 [2024-07-25 11:45:47.703208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.186 [2024-07-25 11:45:47.703285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:49.186 [2024-07-25 11:45:47.703307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.059 ms 00:20:49.186 [2024-07-25 11:45:47.703323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.186 [2024-07-25 11:45:47.703395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.186 [2024-07-25 11:45:47.703417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:49.186 [2024-07-25 11:45:47.703432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:49.186 [2024-07-25 11:45:47.703447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.187 [2024-07-25 11:45:47.704131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.187 [2024-07-25 11:45:47.704166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:49.187 [2024-07-25 11:45:47.704182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.557 ms 00:20:49.187 [2024-07-25 11:45:47.704198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.187 [2024-07-25 11:45:47.704392] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.187 [2024-07-25 11:45:47.704416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:49.187 [2024-07-25 11:45:47.704434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.167 ms 00:20:49.187 [2024-07-25 11:45:47.704452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.187 [2024-07-25 11:45:47.723074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.187 [2024-07-25 11:45:47.723131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:49.187 [2024-07-25 11:45:47.723148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.595 ms 00:20:49.187 [2024-07-25 11:45:47.723164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.187 [2024-07-25 11:45:47.737904] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:20:49.187 [2024-07-25 11:45:47.745602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.187 [2024-07-25 11:45:47.745652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:49.187 [2024-07-25 11:45:47.745676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.308 ms 00:20:49.187 [2024-07-25 11:45:47.745690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.187 [2024-07-25 11:45:47.817993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.187 [2024-07-25 11:45:47.818102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:49.187 [2024-07-25 11:45:47.818130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.237 ms 00:20:49.187 [2024-07-25 11:45:47.818145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.187 [2024-07-25 11:45:47.818418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.187 [2024-07-25 11:45:47.818439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:49.187 [2024-07-25 11:45:47.818460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.212 ms 00:20:49.187 [2024-07-25 11:45:47.818473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.187 [2024-07-25 11:45:47.849530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.187 [2024-07-25 11:45:47.849590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:49.187 [2024-07-25 11:45:47.849612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.979 ms 00:20:49.187 [2024-07-25 11:45:47.849625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.187 [2024-07-25 11:45:47.879559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.187 [2024-07-25 11:45:47.879602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:49.187 [2024-07-25 11:45:47.879624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.884 ms 00:20:49.187 [2024-07-25 11:45:47.879636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.187 [2024-07-25 11:45:47.880513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.187 [2024-07-25 11:45:47.880549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:49.187 [2024-07-25 11:45:47.880568] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.830 ms 00:20:49.187 [2024-07-25 11:45:47.880581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.187 [2024-07-25 11:45:47.971335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.187 [2024-07-25 11:45:47.971412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:49.187 [2024-07-25 11:45:47.971443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 90.680 ms 00:20:49.187 [2024-07-25 11:45:47.971457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.187 [2024-07-25 11:45:48.003982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.187 [2024-07-25 11:45:48.004030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:49.187 [2024-07-25 11:45:48.004052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.464 ms 00:20:49.187 [2024-07-25 11:45:48.004070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.187 [2024-07-25 11:45:48.034956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.187 [2024-07-25 11:45:48.035021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:49.187 [2024-07-25 11:45:48.035045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.824 ms 00:20:49.187 [2024-07-25 11:45:48.035058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.187 [2024-07-25 11:45:48.066337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.187 [2024-07-25 11:45:48.066381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:49.187 [2024-07-25 11:45:48.066402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.228 ms 00:20:49.187 [2024-07-25 11:45:48.066415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.187 [2024-07-25 11:45:48.066473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.187 [2024-07-25 11:45:48.066493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:49.187 [2024-07-25 11:45:48.066514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:49.187 [2024-07-25 11:45:48.066526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.187 [2024-07-25 11:45:48.066657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.187 [2024-07-25 11:45:48.066677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:49.187 [2024-07-25 11:45:48.066694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:20:49.187 [2024-07-25 11:45:48.066710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.187 [2024-07-25 11:45:48.068106] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3354.421 ms, result 0 00:20:49.187 { 00:20:49.187 "name": "ftl0", 00:20:49.187 "uuid": "f138daae-5243-489d-a6c4-0673c193aa89" 00:20:49.187 } 00:20:49.187 11:45:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:20:49.187 11:45:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # grep -qw ftl0 00:20:49.187 11:45:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # jq -r .name 00:20:49.446 11:45:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:20:49.704 [2024-07-25 11:45:48.541061] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:20:49.704 I/O size of 69632 is greater than zero copy threshold (65536). 00:20:49.704 Zero copy mechanism will not be used. 00:20:49.704 Running I/O for 4 seconds... 00:20:53.954 00:20:53.954 Latency(us) 00:20:53.954 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:53.954 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:20:53.954 ftl0 : 4.00 1813.93 120.46 0.00 0.00 578.64 242.04 960.70 00:20:53.954 =================================================================================================================== 00:20:53.954 Total : 1813.93 120.46 0.00 0.00 578.64 242.04 960.70 00:20:53.954 [2024-07-25 11:45:52.552517] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:20:53.954 0 00:20:53.954 11:45:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:20:53.954 [2024-07-25 11:45:52.672302] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:20:53.954 Running I/O for 4 seconds... 00:20:58.206 00:20:58.206 Latency(us) 00:20:58.206 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:58.206 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:20:58.206 ftl0 : 4.02 7357.04 28.74 0.00 0.00 17353.58 348.16 69587.32 00:20:58.206 =================================================================================================================== 00:20:58.206 Total : 7357.04 28.74 0.00 0.00 17353.58 0.00 69587.32 00:20:58.206 [2024-07-25 11:45:56.701757] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:20:58.206 0 00:20:58.206 11:45:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@33 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:20:58.206 [2024-07-25 11:45:56.862188] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:20:58.206 Running I/O for 4 seconds... 
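The verify-pass numbers follow below. For reference, all three bdevperf passes in this run can be replayed by hand against an already-running bdevperf instance with ftl0 configured; a minimal sketch using the same helper script the harness traces above (the BPERF variable is our shorthand; the arguments are copied verbatim from the trace):

# Sketch: replay the three perform_tests passes from this run.
# Assumes bdevperf is already running with ftl0 created.
BPERF=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
$BPERF perform_tests -q 1   -w randwrite -t 4 -o 69632   # qd=1, 68 KiB writes
$BPERF perform_tests -q 128 -w randwrite -t 4 -o 4096    # qd=128, 4 KiB writes
$BPERF perform_tests -q 128 -w verify    -t 4 -o 4096    # qd=128, 4 KiB verify

As a cross-check on the latency tables, MiB/s is just IOPS times I/O size: 1813.93 IOPS * 69632 B is ~120.46 MiB/s, and 7357.04 IOPS * 4096 B is ~28.74 MiB/s, matching the reported columns.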
00:21:02.393 00:21:02.393 Latency(us) 00:21:02.393 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:02.393 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:02.393 Verification LBA range: start 0x0 length 0x1400000 00:21:02.393 ftl0 : 4.01 5949.20 23.24 0.00 0.00 21439.67 357.47 28001.75 00:21:02.393 =================================================================================================================== 00:21:02.393 Total : 5949.20 23.24 0.00 0.00 21439.67 0.00 28001.75 00:21:02.393 [2024-07-25 11:46:00.894387] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:21:02.393 0 00:21:02.393 11:46:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:21:02.393 [2024-07-25 11:46:01.168094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.393 [2024-07-25 11:46:01.168159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:02.393 [2024-07-25 11:46:01.168202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:02.393 [2024-07-25 11:46:01.168219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.393 [2024-07-25 11:46:01.168259] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:02.393 [2024-07-25 11:46:01.171970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.393 [2024-07-25 11:46:01.172007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:02.393 [2024-07-25 11:46:01.172040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.656 ms 00:21:02.393 [2024-07-25 11:46:01.172057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.393 [2024-07-25 11:46:01.173735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.393 [2024-07-25 11:46:01.173834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:02.393 [2024-07-25 11:46:01.173852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.650 ms 00:21:02.393 [2024-07-25 11:46:01.173868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.393 [2024-07-25 11:46:01.351186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.393 [2024-07-25 11:46:01.351294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:02.393 [2024-07-25 11:46:01.351318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 177.293 ms 00:21:02.393 [2024-07-25 11:46:01.351337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.393 [2024-07-25 11:46:01.357604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.393 [2024-07-25 11:46:01.357665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:02.393 [2024-07-25 11:46:01.357682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.218 ms 00:21:02.393 [2024-07-25 11:46:01.357696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.393 [2024-07-25 11:46:01.386992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.393 [2024-07-25 11:46:01.387270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:02.393 [2024-07-25 11:46:01.387400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 29.200 ms 00:21:02.393 [2024-07-25 11:46:01.387430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.393 [2024-07-25 11:46:01.406661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.393 [2024-07-25 11:46:01.406732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:02.393 [2024-07-25 11:46:01.406756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.179 ms 00:21:02.393 [2024-07-25 11:46:01.406773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.393 [2024-07-25 11:46:01.406969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.393 [2024-07-25 11:46:01.406997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:02.393 [2024-07-25 11:46:01.407012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:21:02.393 [2024-07-25 11:46:01.407030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.393 [2024-07-25 11:46:01.438258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.393 [2024-07-25 11:46:01.438322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:02.393 [2024-07-25 11:46:01.438358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.205 ms 00:21:02.393 [2024-07-25 11:46:01.438374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.652 [2024-07-25 11:46:01.468713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.652 [2024-07-25 11:46:01.468759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:02.652 [2024-07-25 11:46:01.468793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.293 ms 00:21:02.652 [2024-07-25 11:46:01.468807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.652 [2024-07-25 11:46:01.497520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.652 [2024-07-25 11:46:01.497566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:02.652 [2024-07-25 11:46:01.497599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.669 ms 00:21:02.652 [2024-07-25 11:46:01.497613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.652 [2024-07-25 11:46:01.526484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.652 [2024-07-25 11:46:01.526530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:02.652 [2024-07-25 11:46:01.526564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.745 ms 00:21:02.652 [2024-07-25 11:46:01.526582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.652 [2024-07-25 11:46:01.526625] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:02.652 [2024-07-25 11:46:01.526655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:02.652 [2024-07-25 11:46:01.526671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:02.652 [2024-07-25 11:46:01.526687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:02.652 [2024-07-25 11:46:01.526699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:21:02.652 [2024-07-25 11:46:01.526714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:02.652 [2024-07-25 11:46:01.526727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:02.652 [2024-07-25 11:46:01.526742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:02.652 [2024-07-25 11:46:01.526754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:02.652 [2024-07-25 11:46:01.526769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:02.652 [2024-07-25 11:46:01.526781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:02.652 [2024-07-25 11:46:01.526795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:02.652 [2024-07-25 11:46:01.526807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:02.653 [2024-07-25 11:46:01.526821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:02.653 [2024-07-25 11:46:01.526833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:02.653 [2024-07-25 11:46:01.526851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:02.653 [2024-07-25 11:46:01.526863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:02.653 [2024-07-25 11:46:01.526878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:02.653 [2024-07-25 11:46:01.526890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:02.653 [2024-07-25 11:46:01.526906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:02.653 [2024-07-25 11:46:01.526939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:02.653 [2024-07-25 11:46:01.526976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:02.653 [2024-07-25 11:46:01.526988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:02.653 [2024-07-25 11:46:01.527004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:02.653 [2024-07-25 11:46:01.527045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:02.653 [2024-07-25 11:46:01.527062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:02.653 [2024-07-25 11:46:01.527074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:02.653 [2024-07-25 11:46:01.527090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:02.653 [2024-07-25 11:46:01.527103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:02.653 [2024-07-25 11:46:01.527118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:21:02.653 [2024-07-25 11:46:01.527131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:02.653 [2024-07-25 11:46:01.527149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:02.653 [2024-07-25 11:46:01.527161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:02.653 [2024-07-25 11:46:01.527176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:02.653 [2024-07-25 11:46:01.527188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:02.653 [2024-07-25 11:46:01.527205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:02.653 [2024-07-25 11:46:01.527217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:02.653 [2024-07-25 11:46:01.527233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:02.653 [2024-07-25 11:46:01.527245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:02.653 [2024-07-25 11:46:01.527261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:02.653 [2024-07-25 11:46:01.527273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:02.653 [2024-07-25 11:46:01.527288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:02.653 [2024-07-25 11:46:01.527301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:02.653 [2024-07-25 11:46:01.527316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:02.653 [2024-07-25 11:46:01.527329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:02.653 [2024-07-25 11:46:01.527344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:02.653 [2024-07-25 11:46:01.527357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:02.653 [2024-07-25 11:46:01.527377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:02.653 [2024-07-25 11:46:01.527390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:02.653 [2024-07-25 11:46:01.527405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:02.653 [2024-07-25 11:46:01.527431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:02.653 [2024-07-25 11:46:01.527448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:02.653 [2024-07-25 11:46:01.527461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:02.653 [2024-07-25 11:46:01.527476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:02.653 [2024-07-25 11:46:01.527488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free
00:21:02.653 [2024-07-25 11:46:01.527503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free
00:21:02.653 [2024-07-25 11:46:01.527516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free
00:21:02.653 [2024-07-25 11:46:01.527531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free
00:21:02.653 [2024-07-25 11:46:01.527543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free
00:21:02.653 [2024-07-25 11:46:01.527557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free
00:21:02.653 [2024-07-25 11:46:01.527570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:21:02.653 [2024-07-25 11:46:01.527585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:21:02.653 [2024-07-25 11:46:01.527598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:21:02.653 [2024-07-25 11:46:01.527616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:21:02.653 [2024-07-25 11:46:01.527628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:21:02.653 [2024-07-25 11:46:01.527643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:21:02.653 [2024-07-25 11:46:01.527656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:21:02.653 [2024-07-25 11:46:01.527673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:21:02.653 [2024-07-25 11:46:01.527686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:21:02.653 [2024-07-25 11:46:01.527701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:21:02.653 [2024-07-25 11:46:01.527714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:21:02.653 [2024-07-25 11:46:01.527731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:21:02.653 [2024-07-25 11:46:01.527744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:21:02.653 [2024-07-25 11:46:01.527759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:21:02.653 [2024-07-25 11:46:01.527771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:21:02.653 [2024-07-25 11:46:01.527786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:21:02.653 [2024-07-25 11:46:01.527799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:21:02.653 [2024-07-25 11:46:01.527814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:21:02.653 [2024-07-25 11:46:01.527827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:21:02.653 [2024-07-25 11:46:01.527845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:21:02.653 [2024-07-25 11:46:01.527857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:21:02.653 [2024-07-25 11:46:01.527873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:21:02.653 [2024-07-25 11:46:01.527886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:21:02.653 [2024-07-25 11:46:01.527901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:21:02.653 [2024-07-25 11:46:01.527913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:21:02.653 [2024-07-25 11:46:01.527928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:21:02.653 [2024-07-25 11:46:01.527940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:21:02.653 [2024-07-25 11:46:01.527969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:21:02.653 [2024-07-25 11:46:01.527984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:21:02.653 [2024-07-25 11:46:01.527999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:21:02.653 [2024-07-25 11:46:01.528012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:21:02.653 [2024-07-25 11:46:01.528027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:21:02.653 [2024-07-25 11:46:01.528040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:21:02.653 [2024-07-25 11:46:01.528055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:21:02.653 [2024-07-25 11:46:01.528068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:21:02.653 [2024-07-25 11:46:01.528086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:21:02.653 [2024-07-25 11:46:01.528099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:21:02.653 [2024-07-25 11:46:01.528115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:21:02.653 [2024-07-25 11:46:01.528128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:21:02.653 [2024-07-25 11:46:01.528145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:21:02.653 [2024-07-25 11:46:01.528159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:21:02.654 [2024-07-25 11:46:01.528184] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:21:02.654 [2024-07-25 11:46:01.528201] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f138daae-5243-489d-a6c4-0673c193aa89
00:21:02.654 [2024-07-25 11:46:01.528218] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:21:02.654 [2024-07-25 11:46:01.528230] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:21:02.654 [2024-07-25 11:46:01.528244] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:21:02.654 [2024-07-25 11:46:01.528260] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:21:02.654 [2024-07-25 11:46:01.528285] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:21:02.654 [2024-07-25 11:46:01.528299] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  crit: 0
00:21:02.654 [2024-07-25 11:46:01.528313] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  high: 0
00:21:02.654 [2024-07-25 11:46:01.528324] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  low: 0
00:21:02.654 [2024-07-25 11:46:01.528340] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
00:21:02.654 [2024-07-25 11:46:01.528352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:02.654 [2024-07-25 11:46:01.528367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:21:02.654 [2024-07-25 11:46:01.528381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.729 ms
00:21:02.654 [2024-07-25 11:46:01.528395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:02.654 [2024-07-25 11:46:01.545015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:02.654 [2024-07-25 11:46:01.545081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:21:02.654 [2024-07-25 11:46:01.545099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.537 ms
00:21:02.654 [2024-07-25 11:46:01.545115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:02.654 [2024-07-25 11:46:01.545584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:02.654 [2024-07-25 11:46:01.545620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:21:02.654 [2024-07-25 11:46:01.545637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.441 ms
00:21:02.654 [2024-07-25 11:46:01.545652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:02.654 [2024-07-25 11:46:01.585444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:02.654 [2024-07-25 11:46:01.585493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:21:02.654 [2024-07-25 11:46:01.585527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:02.654 [2024-07-25 11:46:01.585545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:02.654 [2024-07-25 11:46:01.585619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:02.654 [2024-07-25 11:46:01.585640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:21:02.654 [2024-07-25 11:46:01.585653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:02.654 [2024-07-25 11:46:01.585668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:02.654 [2024-07-25 11:46:01.585780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:02.654 [2024-07-25 11:46:01.585811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:21:02.654 [2024-07-25 11:46:01.585825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:02.654 [2024-07-25 11:46:01.585839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:02.654 [2024-07-25 11:46:01.585864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:02.654 [2024-07-25 11:46:01.585883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:21:02.654 [2024-07-25 11:46:01.585895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:02.654 [2024-07-25 11:46:01.585909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:02.654 [2024-07-25 11:46:01.688840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:02.654 [2024-07-25 11:46:01.688978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:21:02.654 [2024-07-25 11:46:01.689002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:02.654 [2024-07-25 11:46:01.689021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:02.913 [2024-07-25 11:46:01.773759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:02.913 [2024-07-25 11:46:01.773865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:21:02.913 [2024-07-25 11:46:01.773886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:02.913 [2024-07-25 11:46:01.773902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:02.913 [2024-07-25 11:46:01.774103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:02.913 [2024-07-25 11:46:01.774131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:21:02.913 [2024-07-25 11:46:01.774152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:02.913 [2024-07-25 11:46:01.774168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:02.913 [2024-07-25 11:46:01.774237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:02.913 [2024-07-25 11:46:01.774262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:21:02.913 [2024-07-25 11:46:01.774276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:02.913 [2024-07-25 11:46:01.774291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:02.913 [2024-07-25 11:46:01.774433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:02.913 [2024-07-25 11:46:01.774459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:21:02.913 [2024-07-25 11:46:01.774472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:02.913 [2024-07-25 11:46:01.774496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:02.913 [2024-07-25 11:46:01.774548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:02.913 [2024-07-25 11:46:01.774571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:21:02.913 [2024-07-25 11:46:01.774585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:02.913 [2024-07-25 11:46:01.774599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:02.913 [2024-07-25 11:46:01.774654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:02.913 [2024-07-25 11:46:01.774675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:21:02.913 [2024-07-25 11:46:01.774687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:02.913 [2024-07-25 11:46:01.774702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:02.913 [2024-07-25 11:46:01.774770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:02.913 [2024-07-25 11:46:01.774792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:21:02.913 [2024-07-25 11:46:01.774805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:02.913 [2024-07-25 11:46:01.774820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:02.913 [2024-07-25 11:46:01.775027] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 606.858 ms, result 0
00:21:02.913 true
00:21:02.913 11:46:01 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # killprocess 79374
00:21:02.913 11:46:01 ftl.ftl_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 79374 ']'
00:21:02.913 11:46:01 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # kill -0 79374
00:21:02.913 11:46:01 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # uname
00:21:02.913 11:46:01 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:21:02.913 11:46:01 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79374
00:21:02.913 killing process with pid 79374
Received shutdown signal, test time was about 4.000000 seconds
00:21:02.913 
00:21:02.913                                                            Latency(us)
00:21:02.913 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:21:02.913 ===================================================================================================================
00:21:02.913 Total                       :       0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:21:02.913 11:46:01 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:21:02.913 11:46:01 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:21:02.913 11:46:01 ftl.ftl_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79374'
00:21:02.913 11:46:01 ftl.ftl_bdevperf -- common/autotest_common.sh@969 -- # kill 79374
00:21:02.913 11:46:01 ftl.ftl_bdevperf -- common/autotest_common.sh@974 -- # wait 79374
00:21:07.099 11:46:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@38 -- # trap - SIGINT SIGTERM EXIT
00:21:07.099 11:46:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # timing_exit '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0'
00:21:07.099 11:46:05 ftl.ftl_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable
00:21:07.099 11:46:05 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:21:07.099 Remove shared memory files
11:46:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@41 -- # remove_shm
11:46:05 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files
11:46:05 ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f
11:46:05 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f
11:46:05 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f
11:46:05 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
11:46:05 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f
00:21:07.099 ************************************
00:21:07.099 END TEST ftl_bdevperf
00:21:07.099 ************************************
00:21:07.099 
00:21:07.099 real 0m25.756s
00:21:07.099 user 0m29.115s
00:21:07.099 sys 0m1.337s
00:21:07.099 11:46:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:21:07.099 11:46:05 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:21:07.099 11:46:05 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0
00:21:07.099 11:46:05 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:21:07.099 11:46:05 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable
00:21:07.099 11:46:05 ftl -- common/autotest_common.sh@10 -- # set +x
00:21:07.099 ************************************
00:21:07.099 START TEST ftl_trim
00:21:07.099 ************************************
00:21:07.100 11:46:05 ftl.ftl_trim -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0
00:21:07.100 * Looking for test storage...
00:21:07.100 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:21:07.100 11:46:05 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:21:07.100 11:46:05 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh
00:21:07.100 11:46:05 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:21:07.100 11:46:05 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:21:07.100 11:46:05 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
00:21:07.100 11:46:05 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:21:07.100 11:46:05 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:21:07.100 11:46:05 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:21:07.100 11:46:05 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
00:21:07.100 11:46:05 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:21:07.100 11:46:05 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:21:07.100 11:46:05 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:21:07.100 11:46:05 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
00:21:07.100 11:46:05 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:21:07.100 11:46:05 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:21:07.100 11:46:05 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:21:07.100 11:46:05 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid=
00:21:07.100 11:46:05 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:21:07.100 11:46:05 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:21:07.100 11:46:05 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:21:07.100 11:46:05 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
00:21:07.100 11:46:05 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:21:07.100 11:46:05 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:21:07.100 11:46:05 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:21:07.100 11:46:05 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:21:07.100 11:46:05 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid=
00:21:07.100 11:46:05 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid=
00:21:07.100 11:46:05 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:21:07.100 11:46:05 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:21:07.100 11:46:05 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:21:07.100 11:46:05 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0
00:21:07.100 11:46:05 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0
00:21:07.100 11:46:05 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240
00:21:07.100 11:46:05 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536
00:21:07.100 11:46:05 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024
00:21:07.100 11:46:05 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]]
00:21:07.100 11:46:05 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0
00:21:07.100 11:46:05 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0
00:21:07.100 11:46:05 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:21:07.100 11:46:05 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:21:07.100 11:46:05 ftl.ftl_trim -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT
00:21:07.100 11:46:05 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=79735
00:21:07.100 11:46:05 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7
00:21:07.100 11:46:05 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 79735
00:21:07.100 11:46:05 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 79735 ']'
00:21:07.100 11:46:05 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:07.100 11:46:05 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100
00:21:07.100 11:46:05 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:07.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:07.100 11:46:05 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable
00:21:07.100 11:46:05 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:21:07.100 [2024-07-25 11:46:05.977121] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:21:07.100 [2024-07-25 11:46:05.977480] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79735 ] 00:21:07.100 [2024-07-25 11:46:06.145586] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:07.358 [2024-07-25 11:46:06.391149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:07.358 [2024-07-25 11:46:06.391235] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:07.358 [2024-07-25 11:46:06.391239] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:08.296 11:46:07 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:08.296 11:46:07 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:21:08.296 11:46:07 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:21:08.296 11:46:07 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:21:08.296 11:46:07 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:21:08.296 11:46:07 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:21:08.296 11:46:07 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:21:08.296 11:46:07 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:21:08.555 11:46:07 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:08.555 11:46:07 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:21:08.555 11:46:07 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:08.555 11:46:07 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:21:08.555 11:46:07 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:08.555 11:46:07 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:21:08.555 11:46:07 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:21:08.555 11:46:07 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:08.814 11:46:07 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:08.814 { 00:21:08.814 "name": "nvme0n1", 00:21:08.814 "aliases": [ 00:21:08.814 "fc473291-04bd-4ce2-8e20-1b42f2023863" 00:21:08.814 ], 00:21:08.814 "product_name": "NVMe disk", 00:21:08.814 "block_size": 4096, 00:21:08.814 "num_blocks": 1310720, 00:21:08.814 "uuid": "fc473291-04bd-4ce2-8e20-1b42f2023863", 00:21:08.814 "assigned_rate_limits": { 00:21:08.814 "rw_ios_per_sec": 0, 00:21:08.814 "rw_mbytes_per_sec": 0, 00:21:08.814 "r_mbytes_per_sec": 0, 00:21:08.814 "w_mbytes_per_sec": 0 00:21:08.814 }, 00:21:08.814 "claimed": true, 00:21:08.814 "claim_type": "read_many_write_one", 00:21:08.814 "zoned": false, 00:21:08.815 "supported_io_types": { 00:21:08.815 "read": true, 00:21:08.815 "write": true, 00:21:08.815 "unmap": true, 00:21:08.815 "flush": true, 00:21:08.815 "reset": true, 00:21:08.815 "nvme_admin": true, 00:21:08.815 "nvme_io": true, 00:21:08.815 "nvme_io_md": false, 00:21:08.815 "write_zeroes": true, 00:21:08.815 "zcopy": false, 00:21:08.815 "get_zone_info": false, 00:21:08.815 "zone_management": false, 00:21:08.815 "zone_append": false, 00:21:08.815 "compare": true, 00:21:08.815 "compare_and_write": false, 00:21:08.815 "abort": true, 00:21:08.815 "seek_hole": false, 00:21:08.815 "seek_data": false, 00:21:08.815 
"copy": true, 00:21:08.815 "nvme_iov_md": false 00:21:08.815 }, 00:21:08.815 "driver_specific": { 00:21:08.815 "nvme": [ 00:21:08.815 { 00:21:08.815 "pci_address": "0000:00:11.0", 00:21:08.815 "trid": { 00:21:08.815 "trtype": "PCIe", 00:21:08.815 "traddr": "0000:00:11.0" 00:21:08.815 }, 00:21:08.815 "ctrlr_data": { 00:21:08.815 "cntlid": 0, 00:21:08.815 "vendor_id": "0x1b36", 00:21:08.815 "model_number": "QEMU NVMe Ctrl", 00:21:08.815 "serial_number": "12341", 00:21:08.815 "firmware_revision": "8.0.0", 00:21:08.815 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:08.815 "oacs": { 00:21:08.815 "security": 0, 00:21:08.815 "format": 1, 00:21:08.815 "firmware": 0, 00:21:08.815 "ns_manage": 1 00:21:08.815 }, 00:21:08.815 "multi_ctrlr": false, 00:21:08.815 "ana_reporting": false 00:21:08.815 }, 00:21:08.815 "vs": { 00:21:08.815 "nvme_version": "1.4" 00:21:08.815 }, 00:21:08.815 "ns_data": { 00:21:08.815 "id": 1, 00:21:08.815 "can_share": false 00:21:08.815 } 00:21:08.815 } 00:21:08.815 ], 00:21:08.815 "mp_policy": "active_passive" 00:21:08.815 } 00:21:08.815 } 00:21:08.815 ]' 00:21:08.815 11:46:07 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:09.074 11:46:07 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:21:09.074 11:46:07 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:09.074 11:46:07 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=1310720 00:21:09.074 11:46:07 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:21:09.074 11:46:07 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 5120 00:21:09.074 11:46:07 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:21:09.074 11:46:07 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:09.074 11:46:07 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:21:09.074 11:46:07 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:09.074 11:46:07 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:09.332 11:46:08 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=bf4b22ee-af72-42cc-8162-5f3c5619ec41 00:21:09.332 11:46:08 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:21:09.332 11:46:08 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bf4b22ee-af72-42cc-8162-5f3c5619ec41 00:21:09.590 11:46:08 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:21:09.849 11:46:08 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=ddd29850-7db0-41db-8623-cdc7e6e15206 00:21:09.849 11:46:08 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u ddd29850-7db0-41db-8623-cdc7e6e15206 00:21:10.107 11:46:08 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=7f15b4e5-557e-47e1-bb23-d100594abb4f 00:21:10.107 11:46:08 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 7f15b4e5-557e-47e1-bb23-d100594abb4f 00:21:10.107 11:46:08 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:21:10.107 11:46:08 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:21:10.107 11:46:08 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=7f15b4e5-557e-47e1-bb23-d100594abb4f 00:21:10.107 11:46:08 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:21:10.107 11:46:08 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 7f15b4e5-557e-47e1-bb23-d100594abb4f 00:21:10.107 11:46:08 
ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=7f15b4e5-557e-47e1-bb23-d100594abb4f 00:21:10.107 11:46:08 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:10.107 11:46:08 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:21:10.107 11:46:08 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:21:10.107 11:46:09 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7f15b4e5-557e-47e1-bb23-d100594abb4f 00:21:10.366 11:46:09 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:10.366 { 00:21:10.366 "name": "7f15b4e5-557e-47e1-bb23-d100594abb4f", 00:21:10.366 "aliases": [ 00:21:10.366 "lvs/nvme0n1p0" 00:21:10.366 ], 00:21:10.366 "product_name": "Logical Volume", 00:21:10.366 "block_size": 4096, 00:21:10.366 "num_blocks": 26476544, 00:21:10.366 "uuid": "7f15b4e5-557e-47e1-bb23-d100594abb4f", 00:21:10.366 "assigned_rate_limits": { 00:21:10.366 "rw_ios_per_sec": 0, 00:21:10.366 "rw_mbytes_per_sec": 0, 00:21:10.366 "r_mbytes_per_sec": 0, 00:21:10.366 "w_mbytes_per_sec": 0 00:21:10.366 }, 00:21:10.366 "claimed": false, 00:21:10.366 "zoned": false, 00:21:10.366 "supported_io_types": { 00:21:10.366 "read": true, 00:21:10.366 "write": true, 00:21:10.366 "unmap": true, 00:21:10.366 "flush": false, 00:21:10.366 "reset": true, 00:21:10.366 "nvme_admin": false, 00:21:10.366 "nvme_io": false, 00:21:10.366 "nvme_io_md": false, 00:21:10.366 "write_zeroes": true, 00:21:10.366 "zcopy": false, 00:21:10.366 "get_zone_info": false, 00:21:10.366 "zone_management": false, 00:21:10.366 "zone_append": false, 00:21:10.366 "compare": false, 00:21:10.366 "compare_and_write": false, 00:21:10.366 "abort": false, 00:21:10.366 "seek_hole": true, 00:21:10.366 "seek_data": true, 00:21:10.366 "copy": false, 00:21:10.366 "nvme_iov_md": false 00:21:10.366 }, 00:21:10.366 "driver_specific": { 00:21:10.366 "lvol": { 00:21:10.366 "lvol_store_uuid": "ddd29850-7db0-41db-8623-cdc7e6e15206", 00:21:10.366 "base_bdev": "nvme0n1", 00:21:10.366 "thin_provision": true, 00:21:10.366 "num_allocated_clusters": 0, 00:21:10.366 "snapshot": false, 00:21:10.366 "clone": false, 00:21:10.366 "esnap_clone": false 00:21:10.366 } 00:21:10.366 } 00:21:10.366 } 00:21:10.366 ]' 00:21:10.366 11:46:09 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:10.366 11:46:09 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:21:10.366 11:46:09 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:10.366 11:46:09 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:21:10.366 11:46:09 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:21:10.366 11:46:09 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:21:10.366 11:46:09 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:21:10.366 11:46:09 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:21:10.366 11:46:09 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:21:10.933 11:46:09 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:10.933 11:46:09 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:10.933 11:46:09 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 7f15b4e5-557e-47e1-bb23-d100594abb4f 00:21:10.933 11:46:09 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=7f15b4e5-557e-47e1-bb23-d100594abb4f 00:21:10.934 
11:46:09 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:10.934 11:46:09 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:21:10.934 11:46:09 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:21:10.934 11:46:09 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7f15b4e5-557e-47e1-bb23-d100594abb4f 00:21:10.934 11:46:09 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:10.934 { 00:21:10.934 "name": "7f15b4e5-557e-47e1-bb23-d100594abb4f", 00:21:10.934 "aliases": [ 00:21:10.934 "lvs/nvme0n1p0" 00:21:10.934 ], 00:21:10.934 "product_name": "Logical Volume", 00:21:10.934 "block_size": 4096, 00:21:10.934 "num_blocks": 26476544, 00:21:10.934 "uuid": "7f15b4e5-557e-47e1-bb23-d100594abb4f", 00:21:10.934 "assigned_rate_limits": { 00:21:10.934 "rw_ios_per_sec": 0, 00:21:10.934 "rw_mbytes_per_sec": 0, 00:21:10.934 "r_mbytes_per_sec": 0, 00:21:10.934 "w_mbytes_per_sec": 0 00:21:10.934 }, 00:21:10.934 "claimed": false, 00:21:10.934 "zoned": false, 00:21:10.934 "supported_io_types": { 00:21:10.934 "read": true, 00:21:10.934 "write": true, 00:21:10.934 "unmap": true, 00:21:10.934 "flush": false, 00:21:10.934 "reset": true, 00:21:10.934 "nvme_admin": false, 00:21:10.934 "nvme_io": false, 00:21:10.934 "nvme_io_md": false, 00:21:10.934 "write_zeroes": true, 00:21:10.934 "zcopy": false, 00:21:10.934 "get_zone_info": false, 00:21:10.934 "zone_management": false, 00:21:10.934 "zone_append": false, 00:21:10.934 "compare": false, 00:21:10.934 "compare_and_write": false, 00:21:10.934 "abort": false, 00:21:10.934 "seek_hole": true, 00:21:10.934 "seek_data": true, 00:21:10.934 "copy": false, 00:21:10.934 "nvme_iov_md": false 00:21:10.934 }, 00:21:10.934 "driver_specific": { 00:21:10.934 "lvol": { 00:21:10.934 "lvol_store_uuid": "ddd29850-7db0-41db-8623-cdc7e6e15206", 00:21:10.934 "base_bdev": "nvme0n1", 00:21:10.934 "thin_provision": true, 00:21:10.934 "num_allocated_clusters": 0, 00:21:10.934 "snapshot": false, 00:21:10.934 "clone": false, 00:21:10.934 "esnap_clone": false 00:21:10.934 } 00:21:10.934 } 00:21:10.934 } 00:21:10.934 ]' 00:21:10.934 11:46:09 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:11.192 11:46:10 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:21:11.192 11:46:10 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:11.192 11:46:10 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:21:11.192 11:46:10 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:21:11.192 11:46:10 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:21:11.192 11:46:10 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:21:11.192 11:46:10 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:11.450 11:46:10 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:21:11.450 11:46:10 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:21:11.450 11:46:10 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 7f15b4e5-557e-47e1-bb23-d100594abb4f 00:21:11.450 11:46:10 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=7f15b4e5-557e-47e1-bb23-d100594abb4f 00:21:11.450 11:46:10 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:11.450 11:46:10 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:21:11.450 11:46:10 ftl.ftl_trim -- 
common/autotest_common.sh@1381 -- # local nb 00:21:11.450 11:46:10 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7f15b4e5-557e-47e1-bb23-d100594abb4f 00:21:11.709 11:46:10 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:11.709 { 00:21:11.709 "name": "7f15b4e5-557e-47e1-bb23-d100594abb4f", 00:21:11.709 "aliases": [ 00:21:11.709 "lvs/nvme0n1p0" 00:21:11.709 ], 00:21:11.709 "product_name": "Logical Volume", 00:21:11.709 "block_size": 4096, 00:21:11.709 "num_blocks": 26476544, 00:21:11.709 "uuid": "7f15b4e5-557e-47e1-bb23-d100594abb4f", 00:21:11.709 "assigned_rate_limits": { 00:21:11.709 "rw_ios_per_sec": 0, 00:21:11.709 "rw_mbytes_per_sec": 0, 00:21:11.709 "r_mbytes_per_sec": 0, 00:21:11.709 "w_mbytes_per_sec": 0 00:21:11.709 }, 00:21:11.709 "claimed": false, 00:21:11.709 "zoned": false, 00:21:11.709 "supported_io_types": { 00:21:11.709 "read": true, 00:21:11.709 "write": true, 00:21:11.709 "unmap": true, 00:21:11.709 "flush": false, 00:21:11.709 "reset": true, 00:21:11.709 "nvme_admin": false, 00:21:11.709 "nvme_io": false, 00:21:11.709 "nvme_io_md": false, 00:21:11.709 "write_zeroes": true, 00:21:11.709 "zcopy": false, 00:21:11.709 "get_zone_info": false, 00:21:11.709 "zone_management": false, 00:21:11.709 "zone_append": false, 00:21:11.709 "compare": false, 00:21:11.709 "compare_and_write": false, 00:21:11.709 "abort": false, 00:21:11.709 "seek_hole": true, 00:21:11.709 "seek_data": true, 00:21:11.709 "copy": false, 00:21:11.709 "nvme_iov_md": false 00:21:11.709 }, 00:21:11.709 "driver_specific": { 00:21:11.709 "lvol": { 00:21:11.709 "lvol_store_uuid": "ddd29850-7db0-41db-8623-cdc7e6e15206", 00:21:11.709 "base_bdev": "nvme0n1", 00:21:11.709 "thin_provision": true, 00:21:11.709 "num_allocated_clusters": 0, 00:21:11.709 "snapshot": false, 00:21:11.709 "clone": false, 00:21:11.709 "esnap_clone": false 00:21:11.709 } 00:21:11.709 } 00:21:11.709 } 00:21:11.709 ]' 00:21:11.709 11:46:10 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:11.709 11:46:10 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:21:11.709 11:46:10 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:11.709 11:46:10 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:21:11.709 11:46:10 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:21:11.709 11:46:10 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:21:11.709 11:46:10 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:21:11.709 11:46:10 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 7f15b4e5-557e-47e1-bb23-d100594abb4f -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:21:11.968 [2024-07-25 11:46:10.932162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.968 [2024-07-25 11:46:10.932236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:11.968 [2024-07-25 11:46:10.932261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:11.968 [2024-07-25 11:46:10.932290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.968 [2024-07-25 11:46:10.936088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.968 [2024-07-25 11:46:10.936139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:11.968 [2024-07-25 11:46:10.936158] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.761 ms 00:21:11.968 [2024-07-25 11:46:10.936174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.968 [2024-07-25 11:46:10.936343] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:11.968 [2024-07-25 11:46:10.937342] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:11.968 [2024-07-25 11:46:10.937385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.968 [2024-07-25 11:46:10.937407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:11.968 [2024-07-25 11:46:10.937422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.054 ms 00:21:11.968 [2024-07-25 11:46:10.937437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.968 [2024-07-25 11:46:10.937676] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID d38b5acd-953a-4711-aafb-f6576962b114 00:21:11.968 [2024-07-25 11:46:10.939517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.968 [2024-07-25 11:46:10.939560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:11.968 [2024-07-25 11:46:10.939582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:21:11.968 [2024-07-25 11:46:10.939596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.968 [2024-07-25 11:46:10.949207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.968 [2024-07-25 11:46:10.949260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:11.968 [2024-07-25 11:46:10.949299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.512 ms 00:21:11.968 [2024-07-25 11:46:10.949313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.968 [2024-07-25 11:46:10.949520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.968 [2024-07-25 11:46:10.949544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:11.968 [2024-07-25 11:46:10.949562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:21:11.968 [2024-07-25 11:46:10.949575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.968 [2024-07-25 11:46:10.949651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.968 [2024-07-25 11:46:10.949672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:11.968 [2024-07-25 11:46:10.949689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:21:11.968 [2024-07-25 11:46:10.949707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.969 [2024-07-25 11:46:10.949764] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:11.969 [2024-07-25 11:46:10.955047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.969 [2024-07-25 11:46:10.955090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:11.969 [2024-07-25 11:46:10.955108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.296 ms 00:21:11.969 [2024-07-25 11:46:10.955123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.969 [2024-07-25 
11:46:10.955208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.969 [2024-07-25 11:46:10.955231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:11.969 [2024-07-25 11:46:10.955245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:11.969 [2024-07-25 11:46:10.955260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.969 [2024-07-25 11:46:10.955300] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:11.969 [2024-07-25 11:46:10.955479] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:11.969 [2024-07-25 11:46:10.955499] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:11.969 [2024-07-25 11:46:10.955521] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:21:11.969 [2024-07-25 11:46:10.955538] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:11.969 [2024-07-25 11:46:10.955555] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:11.969 [2024-07-25 11:46:10.955574] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:11.969 [2024-07-25 11:46:10.955590] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:11.969 [2024-07-25 11:46:10.955602] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:11.969 [2024-07-25 11:46:10.955640] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:11.969 [2024-07-25 11:46:10.955655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.969 [2024-07-25 11:46:10.955670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:11.969 [2024-07-25 11:46:10.955683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.357 ms 00:21:11.969 [2024-07-25 11:46:10.955698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.969 [2024-07-25 11:46:10.955808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.969 [2024-07-25 11:46:10.955827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:11.969 [2024-07-25 11:46:10.955840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:21:11.969 [2024-07-25 11:46:10.955858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.969 [2024-07-25 11:46:10.956019] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:11.969 [2024-07-25 11:46:10.956061] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:11.969 [2024-07-25 11:46:10.956076] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:11.969 [2024-07-25 11:46:10.956092] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:11.969 [2024-07-25 11:46:10.956106] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:11.969 [2024-07-25 11:46:10.956120] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:11.969 [2024-07-25 11:46:10.956131] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:11.969 [2024-07-25 11:46:10.956145] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region 
band_md 00:21:11.969 [2024-07-25 11:46:10.956156] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:11.969 [2024-07-25 11:46:10.956170] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:11.969 [2024-07-25 11:46:10.956181] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:11.969 [2024-07-25 11:46:10.956196] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:11.969 [2024-07-25 11:46:10.956208] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:11.969 [2024-07-25 11:46:10.956221] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:11.969 [2024-07-25 11:46:10.956233] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:11.969 [2024-07-25 11:46:10.956247] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:11.969 [2024-07-25 11:46:10.956257] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:11.969 [2024-07-25 11:46:10.956283] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:11.969 [2024-07-25 11:46:10.956295] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:11.969 [2024-07-25 11:46:10.956310] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:11.969 [2024-07-25 11:46:10.956322] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:11.969 [2024-07-25 11:46:10.956338] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:11.969 [2024-07-25 11:46:10.956350] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:11.969 [2024-07-25 11:46:10.956364] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:11.969 [2024-07-25 11:46:10.956376] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:11.969 [2024-07-25 11:46:10.956390] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:11.969 [2024-07-25 11:46:10.956401] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:11.969 [2024-07-25 11:46:10.956415] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:11.969 [2024-07-25 11:46:10.956427] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:11.969 [2024-07-25 11:46:10.956441] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:11.969 [2024-07-25 11:46:10.956452] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:11.969 [2024-07-25 11:46:10.956466] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:11.969 [2024-07-25 11:46:10.956478] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:11.969 [2024-07-25 11:46:10.956494] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:11.969 [2024-07-25 11:46:10.956505] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:11.969 [2024-07-25 11:46:10.956520] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:11.969 [2024-07-25 11:46:10.956531] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:11.969 [2024-07-25 11:46:10.956547] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:11.969 [2024-07-25 11:46:10.956558] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:11.969 [2024-07-25 11:46:10.956571] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:11.969 [2024-07-25 11:46:10.956583] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:11.969 [2024-07-25 11:46:10.956602] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:11.969 [2024-07-25 11:46:10.956613] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:11.969 [2024-07-25 11:46:10.956626] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:11.969 [2024-07-25 11:46:10.956639] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:11.969 [2024-07-25 11:46:10.956655] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:11.969 [2024-07-25 11:46:10.956667] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:11.969 [2024-07-25 11:46:10.956685] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:11.969 [2024-07-25 11:46:10.956698] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:11.969 [2024-07-25 11:46:10.956714] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:11.969 [2024-07-25 11:46:10.956726] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:11.969 [2024-07-25 11:46:10.956740] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:11.969 [2024-07-25 11:46:10.956752] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:11.969 [2024-07-25 11:46:10.956773] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:11.969 [2024-07-25 11:46:10.956789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:11.969 [2024-07-25 11:46:10.956806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:11.969 [2024-07-25 11:46:10.956819] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:11.969 [2024-07-25 11:46:10.956834] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:21:11.969 [2024-07-25 11:46:10.956846] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:11.969 [2024-07-25 11:46:10.956861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:11.969 [2024-07-25 11:46:10.956873] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:11.969 [2024-07-25 11:46:10.956890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:11.969 [2024-07-25 11:46:10.956903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:11.969 [2024-07-25 11:46:10.956928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:11.969 [2024-07-25 11:46:10.956943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 
ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:11.969 [2024-07-25 11:46:10.956961] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:11.969 [2024-07-25 11:46:10.956974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:11.969 [2024-07-25 11:46:10.956989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:11.969 [2024-07-25 11:46:10.957002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:11.969 [2024-07-25 11:46:10.957017] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:11.969 [2024-07-25 11:46:10.957030] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:11.970 [2024-07-25 11:46:10.957047] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:11.970 [2024-07-25 11:46:10.957060] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:11.970 [2024-07-25 11:46:10.957074] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:11.970 [2024-07-25 11:46:10.957087] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:11.970 [2024-07-25 11:46:10.957103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.970 [2024-07-25 11:46:10.957116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:11.970 [2024-07-25 11:46:10.957131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.146 ms 00:21:11.970 [2024-07-25 11:46:10.957144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.970 [2024-07-25 11:46:10.957253] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:21:11.970 [2024-07-25 11:46:10.957277] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:14.510 [2024-07-25 11:46:13.533179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.510 [2024-07-25 11:46:13.533503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:14.510 [2024-07-25 11:46:13.533657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2575.922 ms 00:21:14.510 [2024-07-25 11:46:13.533788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.770 [2024-07-25 11:46:13.574039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.770 [2024-07-25 11:46:13.574391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:14.770 [2024-07-25 11:46:13.574437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.843 ms 00:21:14.770 [2024-07-25 11:46:13.574453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.770 [2024-07-25 11:46:13.574708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.770 [2024-07-25 11:46:13.574738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:14.770 [2024-07-25 11:46:13.574760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:21:14.770 [2024-07-25 11:46:13.574773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.770 [2024-07-25 11:46:13.630680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.770 [2024-07-25 11:46:13.630797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:14.770 [2024-07-25 11:46:13.630826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.857 ms 00:21:14.770 [2024-07-25 11:46:13.630840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.770 [2024-07-25 11:46:13.631065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.770 [2024-07-25 11:46:13.631088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:14.770 [2024-07-25 11:46:13.631112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:14.770 [2024-07-25 11:46:13.631127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.770 [2024-07-25 11:46:13.631732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.770 [2024-07-25 11:46:13.631769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:14.770 [2024-07-25 11:46:13.631789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.550 ms 00:21:14.770 [2024-07-25 11:46:13.631802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.770 [2024-07-25 11:46:13.631999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.770 [2024-07-25 11:46:13.632017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:14.770 [2024-07-25 11:46:13.632039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.150 ms 00:21:14.770 [2024-07-25 11:46:13.632051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.770 [2024-07-25 11:46:13.656443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.770 [2024-07-25 11:46:13.656499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:14.770 [2024-07-25 
11:46:13.656523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.328 ms 00:21:14.770 [2024-07-25 11:46:13.656537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.770 [2024-07-25 11:46:13.671341] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:14.770 [2024-07-25 11:46:13.693571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.770 [2024-07-25 11:46:13.693673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:14.770 [2024-07-25 11:46:13.693697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.848 ms 00:21:14.770 [2024-07-25 11:46:13.693714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.770 [2024-07-25 11:46:13.770067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.770 [2024-07-25 11:46:13.770153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:14.770 [2024-07-25 11:46:13.770178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 76.184 ms 00:21:14.770 [2024-07-25 11:46:13.770196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.770 [2024-07-25 11:46:13.770509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.771 [2024-07-25 11:46:13.770537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:14.771 [2024-07-25 11:46:13.770552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.202 ms 00:21:14.771 [2024-07-25 11:46:13.770571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.771 [2024-07-25 11:46:13.801737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.771 [2024-07-25 11:46:13.801787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:14.771 [2024-07-25 11:46:13.801807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.119 ms 00:21:14.771 [2024-07-25 11:46:13.801823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.030 [2024-07-25 11:46:13.832370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.030 [2024-07-25 11:46:13.832420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:15.030 [2024-07-25 11:46:13.832440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.417 ms 00:21:15.030 [2024-07-25 11:46:13.832455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.030 [2024-07-25 11:46:13.833388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.030 [2024-07-25 11:46:13.833429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:15.030 [2024-07-25 11:46:13.833446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.809 ms 00:21:15.030 [2024-07-25 11:46:13.833462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.030 [2024-07-25 11:46:13.923492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.030 [2024-07-25 11:46:13.923581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:15.030 [2024-07-25 11:46:13.923606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.984 ms 00:21:15.030 [2024-07-25 11:46:13.923628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.030 [2024-07-25 
11:46:13.956003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.030 [2024-07-25 11:46:13.956058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:15.030 [2024-07-25 11:46:13.956084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.260 ms 00:21:15.030 [2024-07-25 11:46:13.956100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.030 [2024-07-25 11:46:13.986724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.030 [2024-07-25 11:46:13.986772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:15.030 [2024-07-25 11:46:13.986791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.513 ms 00:21:15.030 [2024-07-25 11:46:13.986806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.030 [2024-07-25 11:46:14.017729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.030 [2024-07-25 11:46:14.017791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:15.030 [2024-07-25 11:46:14.017811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.822 ms 00:21:15.030 [2024-07-25 11:46:14.017827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.030 [2024-07-25 11:46:14.017970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.030 [2024-07-25 11:46:14.017998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:15.030 [2024-07-25 11:46:14.018014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:15.030 [2024-07-25 11:46:14.018033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.030 [2024-07-25 11:46:14.018144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.030 [2024-07-25 11:46:14.018165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:15.030 [2024-07-25 11:46:14.018179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:21:15.030 [2024-07-25 11:46:14.018218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.030 [2024-07-25 11:46:14.019484] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:15.030 [2024-07-25 11:46:14.023662] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3086.945 ms, result 0 00:21:15.030 [2024-07-25 11:46:14.024650] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:15.030 { 00:21:15.030 "name": "ftl0", 00:21:15.030 "uuid": "d38b5acd-953a-4711-aafb-f6576962b114" 00:21:15.030 } 00:21:15.030 11:46:14 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:21:15.030 11:46:14 ftl.ftl_trim -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:21:15.030 11:46:14 ftl.ftl_trim -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:15.030 11:46:14 ftl.ftl_trim -- common/autotest_common.sh@901 -- # local i 00:21:15.030 11:46:14 ftl.ftl_trim -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:15.030 11:46:14 ftl.ftl_trim -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:15.030 11:46:14 ftl.ftl_trim -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:21:15.288 11:46:14 ftl.ftl_trim -- common/autotest_common.sh@906 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:21:15.546 [ 00:21:15.546 { 00:21:15.547 "name": "ftl0", 00:21:15.547 "aliases": [ 00:21:15.547 "d38b5acd-953a-4711-aafb-f6576962b114" 00:21:15.547 ], 00:21:15.547 "product_name": "FTL disk", 00:21:15.547 "block_size": 4096, 00:21:15.547 "num_blocks": 23592960, 00:21:15.547 "uuid": "d38b5acd-953a-4711-aafb-f6576962b114", 00:21:15.547 "assigned_rate_limits": { 00:21:15.547 "rw_ios_per_sec": 0, 00:21:15.547 "rw_mbytes_per_sec": 0, 00:21:15.547 "r_mbytes_per_sec": 0, 00:21:15.547 "w_mbytes_per_sec": 0 00:21:15.547 }, 00:21:15.547 "claimed": false, 00:21:15.547 "zoned": false, 00:21:15.547 "supported_io_types": { 00:21:15.547 "read": true, 00:21:15.547 "write": true, 00:21:15.547 "unmap": true, 00:21:15.547 "flush": true, 00:21:15.547 "reset": false, 00:21:15.547 "nvme_admin": false, 00:21:15.547 "nvme_io": false, 00:21:15.547 "nvme_io_md": false, 00:21:15.547 "write_zeroes": true, 00:21:15.547 "zcopy": false, 00:21:15.547 "get_zone_info": false, 00:21:15.547 "zone_management": false, 00:21:15.547 "zone_append": false, 00:21:15.547 "compare": false, 00:21:15.547 "compare_and_write": false, 00:21:15.547 "abort": false, 00:21:15.547 "seek_hole": false, 00:21:15.547 "seek_data": false, 00:21:15.547 "copy": false, 00:21:15.547 "nvme_iov_md": false 00:21:15.547 }, 00:21:15.547 "driver_specific": { 00:21:15.547 "ftl": { 00:21:15.547 "base_bdev": "7f15b4e5-557e-47e1-bb23-d100594abb4f", 00:21:15.547 "cache": "nvc0n1p0" 00:21:15.547 } 00:21:15.547 } 00:21:15.547 } 00:21:15.547 ] 00:21:15.806 11:46:14 ftl.ftl_trim -- common/autotest_common.sh@907 -- # return 0 00:21:15.806 11:46:14 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:21:15.806 11:46:14 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:21:15.806 11:46:14 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:21:15.806 11:46:14 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:21:16.372 11:46:15 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:21:16.372 { 00:21:16.372 "name": "ftl0", 00:21:16.372 "aliases": [ 00:21:16.372 "d38b5acd-953a-4711-aafb-f6576962b114" 00:21:16.372 ], 00:21:16.372 "product_name": "FTL disk", 00:21:16.372 "block_size": 4096, 00:21:16.372 "num_blocks": 23592960, 00:21:16.372 "uuid": "d38b5acd-953a-4711-aafb-f6576962b114", 00:21:16.372 "assigned_rate_limits": { 00:21:16.372 "rw_ios_per_sec": 0, 00:21:16.372 "rw_mbytes_per_sec": 0, 00:21:16.372 "r_mbytes_per_sec": 0, 00:21:16.372 "w_mbytes_per_sec": 0 00:21:16.372 }, 00:21:16.372 "claimed": false, 00:21:16.372 "zoned": false, 00:21:16.372 "supported_io_types": { 00:21:16.372 "read": true, 00:21:16.372 "write": true, 00:21:16.372 "unmap": true, 00:21:16.372 "flush": true, 00:21:16.372 "reset": false, 00:21:16.372 "nvme_admin": false, 00:21:16.372 "nvme_io": false, 00:21:16.372 "nvme_io_md": false, 00:21:16.372 "write_zeroes": true, 00:21:16.372 "zcopy": false, 00:21:16.372 "get_zone_info": false, 00:21:16.372 "zone_management": false, 00:21:16.372 "zone_append": false, 00:21:16.372 "compare": false, 00:21:16.372 "compare_and_write": false, 00:21:16.372 "abort": false, 00:21:16.372 "seek_hole": false, 00:21:16.372 "seek_data": false, 00:21:16.372 "copy": false, 00:21:16.372 "nvme_iov_md": false 00:21:16.372 }, 00:21:16.372 "driver_specific": { 00:21:16.372 "ftl": { 00:21:16.372 "base_bdev": "7f15b4e5-557e-47e1-bb23-d100594abb4f", 00:21:16.372 "cache": "nvc0n1p0" 
00:21:16.372 } 00:21:16.372 } 00:21:16.372 } 00:21:16.372 ]' 00:21:16.372 11:46:15 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:21:16.372 11:46:15 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:21:16.372 11:46:15 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:21:16.373 [2024-07-25 11:46:15.400294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.373 [2024-07-25 11:46:15.400368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:16.373 [2024-07-25 11:46:15.400397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:16.373 [2024-07-25 11:46:15.400411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.373 [2024-07-25 11:46:15.400468] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:16.373 [2024-07-25 11:46:15.404107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.373 [2024-07-25 11:46:15.404147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:16.373 [2024-07-25 11:46:15.404165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.614 ms 00:21:16.373 [2024-07-25 11:46:15.404184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.373 [2024-07-25 11:46:15.404778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.373 [2024-07-25 11:46:15.404809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:16.373 [2024-07-25 11:46:15.404824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.533 ms 00:21:16.373 [2024-07-25 11:46:15.404846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.373 [2024-07-25 11:46:15.408449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.373 [2024-07-25 11:46:15.408485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:16.373 [2024-07-25 11:46:15.408502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.563 ms 00:21:16.373 [2024-07-25 11:46:15.408517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.373 [2024-07-25 11:46:15.415804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.373 [2024-07-25 11:46:15.415844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:16.373 [2024-07-25 11:46:15.415860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.213 ms 00:21:16.373 [2024-07-25 11:46:15.415876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.634 [2024-07-25 11:46:15.447243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.634 [2024-07-25 11:46:15.447298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:16.634 [2024-07-25 11:46:15.447318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.253 ms 00:21:16.634 [2024-07-25 11:46:15.447338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.634 [2024-07-25 11:46:15.466209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.634 [2024-07-25 11:46:15.466264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:16.634 [2024-07-25 11:46:15.466303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.738 ms 00:21:16.634 
[2024-07-25 11:46:15.466318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.634 [2024-07-25 11:46:15.466586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.634 [2024-07-25 11:46:15.466612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:16.634 [2024-07-25 11:46:15.466627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.164 ms 00:21:16.634 [2024-07-25 11:46:15.466642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.634 [2024-07-25 11:46:15.497204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.634 [2024-07-25 11:46:15.497253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:16.634 [2024-07-25 11:46:15.497272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.522 ms 00:21:16.634 [2024-07-25 11:46:15.497288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.634 [2024-07-25 11:46:15.527259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.634 [2024-07-25 11:46:15.527307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:16.634 [2024-07-25 11:46:15.527326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.870 ms 00:21:16.634 [2024-07-25 11:46:15.527344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.634 [2024-07-25 11:46:15.557157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.634 [2024-07-25 11:46:15.557205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:16.634 [2024-07-25 11:46:15.557224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.715 ms 00:21:16.634 [2024-07-25 11:46:15.557239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.634 [2024-07-25 11:46:15.587354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.634 [2024-07-25 11:46:15.587421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:16.634 [2024-07-25 11:46:15.587440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.930 ms 00:21:16.634 [2024-07-25 11:46:15.587455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.634 [2024-07-25 11:46:15.587555] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:16.634 [2024-07-25 11:46:15.587588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:16.634 [2024-07-25 11:46:15.587606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:16.634 [2024-07-25 11:46:15.587622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:16.634 [2024-07-25 11:46:15.587636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:16.634 [2024-07-25 11:46:15.587653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:16.634 [2024-07-25 11:46:15.587667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:16.634 [2024-07-25 11:46:15.587686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:16.634 [2024-07-25 11:46:15.587700] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:16.634 [2024-07-25 11:46:15.587717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:16.634 [2024-07-25 11:46:15.587731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:16.634 [2024-07-25 11:46:15.587747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:16.634 [2024-07-25 11:46:15.587760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:16.634 [2024-07-25 11:46:15.587776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:16.634 [2024-07-25 11:46:15.587790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:16.634 [2024-07-25 11:46:15.587806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:16.634 [2024-07-25 11:46:15.587819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:16.634 [2024-07-25 11:46:15.587835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:16.634 [2024-07-25 11:46:15.587848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:16.634 [2024-07-25 11:46:15.587864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:16.634 [2024-07-25 11:46:15.587877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:16.634 [2024-07-25 11:46:15.587893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:16.634 [2024-07-25 11:46:15.587906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:16.634 [2024-07-25 11:46:15.587959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:16.634 [2024-07-25 11:46:15.587976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:16.634 [2024-07-25 11:46:15.587993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:16.634 [2024-07-25 11:46:15.588006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:16.634 [2024-07-25 11:46:15.588023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:16.634 [2024-07-25 11:46:15.588046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:16.634 [2024-07-25 11:46:15.588062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:16.634 [2024-07-25 11:46:15.588096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:16.634 [2024-07-25 11:46:15.588113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:16.634 [2024-07-25 11:46:15.588127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:16.634 [2024-07-25 11:46:15.588142] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:16.634 [2024-07-25 11:46:15.588155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:16.634 [2024-07-25 11:46:15.588173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:16.634 [2024-07-25 11:46:15.588186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:16.634 [2024-07-25 11:46:15.588203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:16.634 [2024-07-25 11:46:15.588217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:16.634 [2024-07-25 11:46:15.588236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:16.634 [2024-07-25 11:46:15.588250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:16.634 [2024-07-25 11:46:15.588266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:16.634 [2024-07-25 11:46:15.588291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:16.634 [2024-07-25 11:46:15.588308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:16.634 [2024-07-25 11:46:15.588322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:16.634 [2024-07-25 11:46:15.588338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:16.634 [2024-07-25 11:46:15.588351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:16.634 [2024-07-25 11:46:15.588369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:16.634 [2024-07-25 11:46:15.588382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:16.634 [2024-07-25 11:46:15.588399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:16.634 [2024-07-25 11:46:15.588412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:16.634 [2024-07-25 11:46:15.588427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:16.634 [2024-07-25 11:46:15.588440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:16.634 [2024-07-25 11:46:15.588456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:16.634 [2024-07-25 11:46:15.588471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:16.634 [2024-07-25 11:46:15.588490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:16.634 [2024-07-25 11:46:15.588503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:16.634 [2024-07-25 11:46:15.588519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:16.634 [2024-07-25 
11:46:15.588533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:16.634 [2024-07-25 11:46:15.588554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:16.634 [2024-07-25 11:46:15.588567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:16.634 [2024-07-25 11:46:15.588582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:16.634 [2024-07-25 11:46:15.588596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:16.634 [2024-07-25 11:46:15.588611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:16.634 [2024-07-25 11:46:15.588626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:16.635 [2024-07-25 11:46:15.588642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:16.635 [2024-07-25 11:46:15.588656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:16.635 [2024-07-25 11:46:15.588672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:16.635 [2024-07-25 11:46:15.588686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:16.635 [2024-07-25 11:46:15.588703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:16.635 [2024-07-25 11:46:15.588716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:16.635 [2024-07-25 11:46:15.588741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:16.635 [2024-07-25 11:46:15.588754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:16.635 [2024-07-25 11:46:15.588772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:16.635 [2024-07-25 11:46:15.588786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:16.635 [2024-07-25 11:46:15.588802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:16.635 [2024-07-25 11:46:15.588815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:16.635 [2024-07-25 11:46:15.588831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:16.635 [2024-07-25 11:46:15.588844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:16.635 [2024-07-25 11:46:15.588860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:16.635 [2024-07-25 11:46:15.588873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:16.635 [2024-07-25 11:46:15.588890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:16.635 [2024-07-25 11:46:15.588903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 
00:21:16.635 [2024-07-25 11:46:15.588930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:16.635 [2024-07-25 11:46:15.588946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:16.635 [2024-07-25 11:46:15.588963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:16.635 [2024-07-25 11:46:15.588976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:16.635 [2024-07-25 11:46:15.588995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:16.635 [2024-07-25 11:46:15.589008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:16.635 [2024-07-25 11:46:15.589024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:16.635 [2024-07-25 11:46:15.589038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:16.635 [2024-07-25 11:46:15.589054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:16.635 [2024-07-25 11:46:15.589068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:16.635 [2024-07-25 11:46:15.589084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:16.635 [2024-07-25 11:46:15.589097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:16.635 [2024-07-25 11:46:15.589113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:16.635 [2024-07-25 11:46:15.589127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:16.635 [2024-07-25 11:46:15.589143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:16.635 [2024-07-25 11:46:15.589156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:16.635 [2024-07-25 11:46:15.589172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:16.635 [2024-07-25 11:46:15.589185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:16.635 [2024-07-25 11:46:15.589220] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:16.635 [2024-07-25 11:46:15.589234] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d38b5acd-953a-4711-aafb-f6576962b114 00:21:16.635 [2024-07-25 11:46:15.589252] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:16.635 [2024-07-25 11:46:15.589268] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:16.635 [2024-07-25 11:46:15.589282] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:16.635 [2024-07-25 11:46:15.589294] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:16.635 [2024-07-25 11:46:15.589309] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:16.635 [2024-07-25 11:46:15.589321] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:16.635 [2024-07-25 11:46:15.589335] 
ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:16.635 [2024-07-25 11:46:15.589346] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:16.635 [2024-07-25 11:46:15.589361] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:16.635 [2024-07-25 11:46:15.589374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.635 [2024-07-25 11:46:15.589389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:16.635 [2024-07-25 11:46:15.589403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.821 ms 00:21:16.635 [2024-07-25 11:46:15.589418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.635 [2024-07-25 11:46:15.606459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.635 [2024-07-25 11:46:15.606504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:16.635 [2024-07-25 11:46:15.606522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.997 ms 00:21:16.635 [2024-07-25 11:46:15.606541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.635 [2024-07-25 11:46:15.607087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.635 [2024-07-25 11:46:15.607116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:16.635 [2024-07-25 11:46:15.607137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.462 ms 00:21:16.635 [2024-07-25 11:46:15.607152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.635 [2024-07-25 11:46:15.666779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:16.635 [2024-07-25 11:46:15.666852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:16.635 [2024-07-25 11:46:15.666874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:16.635 [2024-07-25 11:46:15.666889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.635 [2024-07-25 11:46:15.667112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:16.635 [2024-07-25 11:46:15.667138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:16.635 [2024-07-25 11:46:15.667152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:16.635 [2024-07-25 11:46:15.667168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.635 [2024-07-25 11:46:15.667262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:16.635 [2024-07-25 11:46:15.667287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:16.635 [2024-07-25 11:46:15.667301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:16.635 [2024-07-25 11:46:15.667320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.635 [2024-07-25 11:46:15.667362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:16.635 [2024-07-25 11:46:15.667392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:16.635 [2024-07-25 11:46:15.667406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:16.635 [2024-07-25 11:46:15.667420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.894 [2024-07-25 11:46:15.777003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:21:16.894 [2024-07-25 11:46:15.777071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:16.894 [2024-07-25 11:46:15.777093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:16.894 [2024-07-25 11:46:15.777110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.894 [2024-07-25 11:46:15.862749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:16.894 [2024-07-25 11:46:15.862821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:16.894 [2024-07-25 11:46:15.862842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:16.894 [2024-07-25 11:46:15.862859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.894 [2024-07-25 11:46:15.863026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:16.894 [2024-07-25 11:46:15.863057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:16.894 [2024-07-25 11:46:15.863072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:16.894 [2024-07-25 11:46:15.863091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.894 [2024-07-25 11:46:15.863163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:16.894 [2024-07-25 11:46:15.863181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:16.894 [2024-07-25 11:46:15.863194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:16.894 [2024-07-25 11:46:15.863209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.894 [2024-07-25 11:46:15.863359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:16.894 [2024-07-25 11:46:15.863385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:16.894 [2024-07-25 11:46:15.863421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:16.894 [2024-07-25 11:46:15.863437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.894 [2024-07-25 11:46:15.863514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:16.894 [2024-07-25 11:46:15.863538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:16.894 [2024-07-25 11:46:15.863552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:16.894 [2024-07-25 11:46:15.863567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.894 [2024-07-25 11:46:15.863640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:16.894 [2024-07-25 11:46:15.863660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:16.894 [2024-07-25 11:46:15.863676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:16.894 [2024-07-25 11:46:15.863693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.894 [2024-07-25 11:46:15.863770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:16.894 [2024-07-25 11:46:15.863791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:16.894 [2024-07-25 11:46:15.863804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:16.894 [2024-07-25 11:46:15.863819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.894 [2024-07-25 
11:46:15.864094] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 463.798 ms, result 0 00:21:16.895 true 00:21:16.895 11:46:15 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 79735 00:21:16.895 11:46:15 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 79735 ']' 00:21:16.895 11:46:15 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 79735 00:21:16.895 11:46:15 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:21:16.895 11:46:15 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:16.895 11:46:15 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79735 00:21:16.895 killing process with pid 79735 00:21:16.895 11:46:15 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:16.895 11:46:15 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:16.895 11:46:15 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79735' 00:21:16.895 11:46:15 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 79735 00:21:16.895 11:46:15 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 79735 00:21:22.200 11:46:21 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:21:23.574 65536+0 records in 00:21:23.574 65536+0 records out 00:21:23.574 268435456 bytes (268 MB, 256 MiB) copied, 1.36995 s, 196 MB/s 00:21:23.574 11:46:22 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:23.574 [2024-07-25 11:46:22.501201] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
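The write phase above is self-contained and can be replayed by hand. A minimal sketch, assuming an SPDK build tree, a still-running target for the RPC query, and the ftl.json that save_subsystem_config emitted earlier; the paths and figures are the ones visible in the log, everything else is illustrative:

  # size the target first, as trim.sh does: 23592960 blocks x 4096 B = 90 GiB
  ./scripts/rpc.py bdev_get_bdevs -b ftl0 | jq '.[] .num_blocks'

  # 65536 x 4 KiB = 268435456 B = 256 MiB of random data; at the logged
  # 196 MB/s this is the ~1.37 s dd reports above
  dd if=/dev/urandom of=random_pattern bs=4K count=65536

  # replay the pattern into ftl0; spdk_dd brings up its own SPDK app (the
  # startup banner that follows is its output) and rebuilds the bdev stack
  # from the saved JSON config
  ./build/bin/spdk_dd --if=random_pattern --ob=ftl0 --json=ftl.json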
00:21:23.574 [2024-07-25 11:46:22.501416] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79940 ] 00:21:23.863 [2024-07-25 11:46:22.689221] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:24.135 [2024-07-25 11:46:22.957773] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:24.397 [2024-07-25 11:46:23.312884] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:24.397 [2024-07-25 11:46:23.313050] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:24.658 [2024-07-25 11:46:23.480194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.658 [2024-07-25 11:46:23.480257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:24.658 [2024-07-25 11:46:23.480288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:24.658 [2024-07-25 11:46:23.480302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.658 [2024-07-25 11:46:23.483999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.658 [2024-07-25 11:46:23.484053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:24.658 [2024-07-25 11:46:23.484071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.666 ms 00:21:24.658 [2024-07-25 11:46:23.484084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.658 [2024-07-25 11:46:23.484217] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:24.658 [2024-07-25 11:46:23.485172] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:24.658 [2024-07-25 11:46:23.485214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.658 [2024-07-25 11:46:23.485230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:24.658 [2024-07-25 11:46:23.485243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.009 ms 00:21:24.658 [2024-07-25 11:46:23.485255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.658 [2024-07-25 11:46:23.487449] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:24.658 [2024-07-25 11:46:23.504638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.658 [2024-07-25 11:46:23.504681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:24.658 [2024-07-25 11:46:23.504728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.190 ms 00:21:24.658 [2024-07-25 11:46:23.504740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.658 [2024-07-25 11:46:23.504874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.659 [2024-07-25 11:46:23.504897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:24.659 [2024-07-25 11:46:23.504911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:21:24.659 [2024-07-25 11:46:23.504946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.659 [2024-07-25 11:46:23.513827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:24.659 [2024-07-25 11:46:23.513872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:24.659 [2024-07-25 11:46:23.513907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.816 ms 00:21:24.659 [2024-07-25 11:46:23.513920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.659 [2024-07-25 11:46:23.514091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.659 [2024-07-25 11:46:23.514115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:24.659 [2024-07-25 11:46:23.514130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:21:24.659 [2024-07-25 11:46:23.514142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.659 [2024-07-25 11:46:23.514192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.659 [2024-07-25 11:46:23.514208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:24.659 [2024-07-25 11:46:23.514227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:21:24.659 [2024-07-25 11:46:23.514238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.659 [2024-07-25 11:46:23.514273] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:24.659 [2024-07-25 11:46:23.519374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.659 [2024-07-25 11:46:23.519413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:24.659 [2024-07-25 11:46:23.519447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.112 ms 00:21:24.659 [2024-07-25 11:46:23.519459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.659 [2024-07-25 11:46:23.519555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.659 [2024-07-25 11:46:23.519576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:24.659 [2024-07-25 11:46:23.519590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:24.659 [2024-07-25 11:46:23.519602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.659 [2024-07-25 11:46:23.519637] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:24.659 [2024-07-25 11:46:23.519674] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:24.659 [2024-07-25 11:46:23.519725] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:24.659 [2024-07-25 11:46:23.519748] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:21:24.659 [2024-07-25 11:46:23.519856] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:24.659 [2024-07-25 11:46:23.519873] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:24.659 [2024-07-25 11:46:23.519889] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:21:24.659 [2024-07-25 11:46:23.519905] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:24.659 [2024-07-25 11:46:23.519919] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:24.659 [2024-07-25 11:46:23.519959] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:24.659 [2024-07-25 11:46:23.519974] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:24.659 [2024-07-25 11:46:23.519986] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:24.659 [2024-07-25 11:46:23.519998] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:24.659 [2024-07-25 11:46:23.520012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.659 [2024-07-25 11:46:23.520024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:24.659 [2024-07-25 11:46:23.520037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.378 ms 00:21:24.659 [2024-07-25 11:46:23.520048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.659 [2024-07-25 11:46:23.520146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.659 [2024-07-25 11:46:23.520163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:24.659 [2024-07-25 11:46:23.520182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:21:24.659 [2024-07-25 11:46:23.520193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.659 [2024-07-25 11:46:23.520317] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:24.659 [2024-07-25 11:46:23.520337] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:24.659 [2024-07-25 11:46:23.520350] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:24.659 [2024-07-25 11:46:23.520362] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:24.659 [2024-07-25 11:46:23.520374] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:24.659 [2024-07-25 11:46:23.520385] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:24.659 [2024-07-25 11:46:23.520396] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:24.659 [2024-07-25 11:46:23.520407] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:24.659 [2024-07-25 11:46:23.520418] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:24.659 [2024-07-25 11:46:23.520428] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:24.659 [2024-07-25 11:46:23.520439] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:24.659 [2024-07-25 11:46:23.520450] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:24.659 [2024-07-25 11:46:23.520460] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:24.659 [2024-07-25 11:46:23.520471] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:24.659 [2024-07-25 11:46:23.520482] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:24.659 [2024-07-25 11:46:23.520492] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:24.659 [2024-07-25 11:46:23.520502] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:24.659 [2024-07-25 11:46:23.520512] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:24.659 [2024-07-25 11:46:23.520537] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:24.659 [2024-07-25 11:46:23.520548] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:24.659 [2024-07-25 11:46:23.520562] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:24.659 [2024-07-25 11:46:23.520573] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:24.659 [2024-07-25 11:46:23.520584] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:24.659 [2024-07-25 11:46:23.520595] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:24.659 [2024-07-25 11:46:23.520606] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:24.659 [2024-07-25 11:46:23.520616] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:24.659 [2024-07-25 11:46:23.520627] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:24.659 [2024-07-25 11:46:23.520638] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:24.659 [2024-07-25 11:46:23.520656] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:24.659 [2024-07-25 11:46:23.520667] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:24.659 [2024-07-25 11:46:23.520677] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:24.659 [2024-07-25 11:46:23.520688] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:24.659 [2024-07-25 11:46:23.520699] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:24.659 [2024-07-25 11:46:23.520709] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:24.659 [2024-07-25 11:46:23.520720] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:24.659 [2024-07-25 11:46:23.520731] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:24.659 [2024-07-25 11:46:23.520741] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:24.659 [2024-07-25 11:46:23.520752] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:24.659 [2024-07-25 11:46:23.520762] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:24.659 [2024-07-25 11:46:23.520773] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:24.659 [2024-07-25 11:46:23.520783] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:24.659 [2024-07-25 11:46:23.520794] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:24.659 [2024-07-25 11:46:23.520805] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:24.659 [2024-07-25 11:46:23.520815] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:24.659 [2024-07-25 11:46:23.520827] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:24.659 [2024-07-25 11:46:23.520838] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:24.659 [2024-07-25 11:46:23.520850] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:24.659 [2024-07-25 11:46:23.520867] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:24.659 [2024-07-25 11:46:23.520879] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:24.659 [2024-07-25 11:46:23.520890] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:24.659 
[2024-07-25 11:46:23.520901] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:24.659 [2024-07-25 11:46:23.520912] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:24.659 [2024-07-25 11:46:23.520938] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:24.660 [2024-07-25 11:46:23.520953] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:24.660 [2024-07-25 11:46:23.520968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:24.660 [2024-07-25 11:46:23.520982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:24.660 [2024-07-25 11:46:23.520994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:24.660 [2024-07-25 11:46:23.521005] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:21:24.660 [2024-07-25 11:46:23.521017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:24.660 [2024-07-25 11:46:23.521030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:24.660 [2024-07-25 11:46:23.521042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:24.660 [2024-07-25 11:46:23.521054] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:24.660 [2024-07-25 11:46:23.521065] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:24.660 [2024-07-25 11:46:23.521077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:24.660 [2024-07-25 11:46:23.521088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:24.660 [2024-07-25 11:46:23.521099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:24.660 [2024-07-25 11:46:23.521111] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:24.660 [2024-07-25 11:46:23.521130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:24.660 [2024-07-25 11:46:23.521142] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:24.660 [2024-07-25 11:46:23.521153] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:24.660 [2024-07-25 11:46:23.521174] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:24.660 [2024-07-25 11:46:23.521186] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:21:24.660 [2024-07-25 11:46:23.521198] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:24.660 [2024-07-25 11:46:23.521210] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:24.660 [2024-07-25 11:46:23.521222] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:24.660 [2024-07-25 11:46:23.521253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.660 [2024-07-25 11:46:23.521266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:24.660 [2024-07-25 11:46:23.521278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.012 ms 00:21:24.660 [2024-07-25 11:46:23.521290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.660 [2024-07-25 11:46:23.570781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.660 [2024-07-25 11:46:23.570866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:24.660 [2024-07-25 11:46:23.570890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.406 ms 00:21:24.660 [2024-07-25 11:46:23.570903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.660 [2024-07-25 11:46:23.571216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.660 [2024-07-25 11:46:23.571240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:24.660 [2024-07-25 11:46:23.571255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:21:24.660 [2024-07-25 11:46:23.571268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.660 [2024-07-25 11:46:23.614947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.660 [2024-07-25 11:46:23.615030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:24.660 [2024-07-25 11:46:23.615060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.642 ms 00:21:24.660 [2024-07-25 11:46:23.615073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.660 [2024-07-25 11:46:23.615248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.660 [2024-07-25 11:46:23.615270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:24.660 [2024-07-25 11:46:23.615285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:24.660 [2024-07-25 11:46:23.615298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.660 [2024-07-25 11:46:23.615861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.660 [2024-07-25 11:46:23.615881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:24.660 [2024-07-25 11:46:23.615895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.530 ms 00:21:24.660 [2024-07-25 11:46:23.615913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.660 [2024-07-25 11:46:23.616134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.660 [2024-07-25 11:46:23.616155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:24.660 [2024-07-25 11:46:23.616170] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.159 ms 00:21:24.660 [2024-07-25 11:46:23.616181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.660 [2024-07-25 11:46:23.635346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.660 [2024-07-25 11:46:23.635396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:24.660 [2024-07-25 11:46:23.635415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.123 ms 00:21:24.660 [2024-07-25 11:46:23.635428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.660 [2024-07-25 11:46:23.652648] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:21:24.660 [2024-07-25 11:46:23.652696] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:24.660 [2024-07-25 11:46:23.652717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.660 [2024-07-25 11:46:23.652731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:24.660 [2024-07-25 11:46:23.652745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.092 ms 00:21:24.660 [2024-07-25 11:46:23.652757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.660 [2024-07-25 11:46:23.681988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.660 [2024-07-25 11:46:23.682034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:24.660 [2024-07-25 11:46:23.682052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.121 ms 00:21:24.660 [2024-07-25 11:46:23.682064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.660 [2024-07-25 11:46:23.697016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.660 [2024-07-25 11:46:23.697056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:24.660 [2024-07-25 11:46:23.697073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.828 ms 00:21:24.660 [2024-07-25 11:46:23.697084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.918 [2024-07-25 11:46:23.712304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.918 [2024-07-25 11:46:23.712345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:24.918 [2024-07-25 11:46:23.712362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.128 ms 00:21:24.918 [2024-07-25 11:46:23.712374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.918 [2024-07-25 11:46:23.713312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.918 [2024-07-25 11:46:23.713348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:24.918 [2024-07-25 11:46:23.713365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.805 ms 00:21:24.918 [2024-07-25 11:46:23.713377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.918 [2024-07-25 11:46:23.789923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.918 [2024-07-25 11:46:23.790017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:24.918 [2024-07-25 11:46:23.790041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 76.493 ms 00:21:24.918 [2024-07-25 11:46:23.790053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.918 [2024-07-25 11:46:23.802623] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:24.918 [2024-07-25 11:46:23.823524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.918 [2024-07-25 11:46:23.823595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:24.918 [2024-07-25 11:46:23.823618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.279 ms 00:21:24.918 [2024-07-25 11:46:23.823631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.918 [2024-07-25 11:46:23.823808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.918 [2024-07-25 11:46:23.823835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:24.919 [2024-07-25 11:46:23.823849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:21:24.919 [2024-07-25 11:46:23.823861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.919 [2024-07-25 11:46:23.824002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.919 [2024-07-25 11:46:23.824022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:24.919 [2024-07-25 11:46:23.824035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:21:24.919 [2024-07-25 11:46:23.824046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.919 [2024-07-25 11:46:23.824101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.919 [2024-07-25 11:46:23.824117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:24.919 [2024-07-25 11:46:23.824138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:24.919 [2024-07-25 11:46:23.824150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.919 [2024-07-25 11:46:23.824209] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:24.919 [2024-07-25 11:46:23.824227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.919 [2024-07-25 11:46:23.824251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:24.919 [2024-07-25 11:46:23.824265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:21:24.919 [2024-07-25 11:46:23.824289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.919 [2024-07-25 11:46:23.855529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.919 [2024-07-25 11:46:23.855579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:24.919 [2024-07-25 11:46:23.855616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.203 ms 00:21:24.919 [2024-07-25 11:46:23.855628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.919 [2024-07-25 11:46:23.855787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.919 [2024-07-25 11:46:23.855809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:24.919 [2024-07-25 11:46:23.855823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:21:24.919 [2024-07-25 11:46:23.855835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:21:24.919 [2024-07-25 11:46:23.857338] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:24.919 [2024-07-25 11:46:23.861522] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 376.685 ms, result 0 00:21:24.919 [2024-07-25 11:46:23.862335] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:24.919 [2024-07-25 11:46:23.878622] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:35.471  Copying: 24/256 [MB] (24 MBps) Copying: 49/256 [MB] (24 MBps) Copying: 74/256 [MB] (24 MBps) Copying: 98/256 [MB] (23 MBps) Copying: 122/256 [MB] (24 MBps) Copying: 147/256 [MB] (24 MBps) Copying: 172/256 [MB] (24 MBps) Copying: 196/256 [MB] (24 MBps) Copying: 220/256 [MB] (23 MBps) Copying: 244/256 [MB] (24 MBps) Copying: 256/256 [MB] (average 24 MBps)[2024-07-25 11:46:34.353366] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:35.471 [2024-07-25 11:46:34.365845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.471 [2024-07-25 11:46:34.366150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:35.471 [2024-07-25 11:46:34.366298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:35.471 [2024-07-25 11:46:34.366353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.471 [2024-07-25 11:46:34.366497] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:35.471 [2024-07-25 11:46:34.370267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.471 [2024-07-25 11:46:34.370330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:35.471 [2024-07-25 11:46:34.370348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.609 ms 00:21:35.471 [2024-07-25 11:46:34.370359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.471 [2024-07-25 11:46:34.372302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.471 [2024-07-25 11:46:34.372472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:35.471 [2024-07-25 11:46:34.372598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.913 ms 00:21:35.471 [2024-07-25 11:46:34.372745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.471 [2024-07-25 11:46:34.379825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.471 [2024-07-25 11:46:34.380025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:35.471 [2024-07-25 11:46:34.380171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.010 ms 00:21:35.471 [2024-07-25 11:46:34.380197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.471 [2024-07-25 11:46:34.387227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.471 [2024-07-25 11:46:34.387418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:35.471 [2024-07-25 11:46:34.387531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.977 ms 00:21:35.471 [2024-07-25 11:46:34.387581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.471 [2024-07-25 
11:46:34.416390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.471 [2024-07-25 11:46:34.416575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:35.471 [2024-07-25 11:46:34.416695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.694 ms 00:21:35.471 [2024-07-25 11:46:34.416745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.471 [2024-07-25 11:46:34.434112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.471 [2024-07-25 11:46:34.434335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:35.471 [2024-07-25 11:46:34.434456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.270 ms 00:21:35.471 [2024-07-25 11:46:34.434517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.471 [2024-07-25 11:46:34.434716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.471 [2024-07-25 11:46:34.434779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:35.471 [2024-07-25 11:46:34.434820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:21:35.471 [2024-07-25 11:46:34.434936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.471 [2024-07-25 11:46:34.465123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.471 [2024-07-25 11:46:34.465326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:35.471 [2024-07-25 11:46:34.465443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.057 ms 00:21:35.471 [2024-07-25 11:46:34.465492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.471 [2024-07-25 11:46:34.495517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.471 [2024-07-25 11:46:34.495706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:35.471 [2024-07-25 11:46:34.495825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.927 ms 00:21:35.471 [2024-07-25 11:46:34.495874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.731 [2024-07-25 11:46:34.525312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.731 [2024-07-25 11:46:34.525541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:35.731 [2024-07-25 11:46:34.525659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.328 ms 00:21:35.731 [2024-07-25 11:46:34.525709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.731 [2024-07-25 11:46:34.554948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.731 [2024-07-25 11:46:34.555161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:35.731 [2024-07-25 11:46:34.555281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.108 ms 00:21:35.731 [2024-07-25 11:46:34.555331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.731 [2024-07-25 11:46:34.555434] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:35.731 [2024-07-25 11:46:34.555546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:35.731 [2024-07-25 11:46:34.555608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 
00:21:35.731 [2024-07-25 11:46:34.555663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:35.731 [2024-07-25 11:46:34.555727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:35.731 [2024-07-25 11:46:34.555783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:35.731 [2024-07-25 11:46:34.555947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:35.731 [2024-07-25 11:46:34.556007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:35.731 [2024-07-25 11:46:34.556065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:35.731 [2024-07-25 11:46:34.556199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:35.731 [2024-07-25 11:46:34.556279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:35.731 [2024-07-25 11:46:34.556299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:35.731 [2024-07-25 11:46:34.556311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:35.731 [2024-07-25 11:46:34.556324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:35.731 [2024-07-25 11:46:34.556336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:35.731 [2024-07-25 11:46:34.556348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:35.731 [2024-07-25 11:46:34.556361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:35.731 [2024-07-25 11:46:34.556374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:35.731 [2024-07-25 11:46:34.556386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:35.731 [2024-07-25 11:46:34.556398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:35.731 [2024-07-25 11:46:34.556410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:35.731 [2024-07-25 11:46:34.556422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:35.731 [2024-07-25 11:46:34.556434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:35.731 [2024-07-25 11:46:34.556446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:35.731 [2024-07-25 11:46:34.556458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:35.731 [2024-07-25 11:46:34.556471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:35.731 [2024-07-25 11:46:34.556483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:35.731 [2024-07-25 11:46:34.556495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 
0 state: free 00:21:35.731 [2024-07-25 11:46:34.556507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:35.731 [2024-07-25 11:46:34.556519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:35.731 [2024-07-25 11:46:34.556532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:35.731 [2024-07-25 11:46:34.556544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:35.731 [2024-07-25 11:46:34.556556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:35.731 [2024-07-25 11:46:34.556568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:35.731 [2024-07-25 11:46:34.556580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:35.731 [2024-07-25 11:46:34.556593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:35.731 [2024-07-25 11:46:34.556605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:35.731 [2024-07-25 11:46:34.556617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:35.731 [2024-07-25 11:46:34.556629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:35.731 [2024-07-25 11:46:34.556642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:35.731 [2024-07-25 11:46:34.556656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:35.731 [2024-07-25 11:46:34.556668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:35.731 [2024-07-25 11:46:34.556681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:35.731 [2024-07-25 11:46:34.556693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:35.732 [2024-07-25 11:46:34.556705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:35.732 [2024-07-25 11:46:34.556718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:35.732 [2024-07-25 11:46:34.556732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:35.732 [2024-07-25 11:46:34.556745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:35.732 [2024-07-25 11:46:34.556758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:35.732 [2024-07-25 11:46:34.556770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:35.732 [2024-07-25 11:46:34.556783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:35.732 [2024-07-25 11:46:34.556796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:35.732 [2024-07-25 11:46:34.556809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
52: 0 / 261120 wr_cnt: 0 state: free 00:21:35.732 [2024-07-25 11:46:34.556821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:35.732 [2024-07-25 11:46:34.556833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:35.732 [2024-07-25 11:46:34.556846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:35.732 [2024-07-25 11:46:34.556858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:35.732 [2024-07-25 11:46:34.556871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:35.732 [2024-07-25 11:46:34.556883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:35.732 [2024-07-25 11:46:34.556896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:35.732 [2024-07-25 11:46:34.556908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:35.732 [2024-07-25 11:46:34.556934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:35.732 [2024-07-25 11:46:34.556949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:35.732 [2024-07-25 11:46:34.556962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:35.732 [2024-07-25 11:46:34.556974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:35.732 [2024-07-25 11:46:34.556987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:35.732 [2024-07-25 11:46:34.557000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:35.732 [2024-07-25 11:46:34.557012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:35.732 [2024-07-25 11:46:34.557024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:35.732 [2024-07-25 11:46:34.557037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:35.732 [2024-07-25 11:46:34.557050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:35.732 [2024-07-25 11:46:34.557062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:35.732 [2024-07-25 11:46:34.557075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:35.732 [2024-07-25 11:46:34.557088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:35.732 [2024-07-25 11:46:34.557100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:35.732 [2024-07-25 11:46:34.557113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:35.732 [2024-07-25 11:46:34.557125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:35.732 [2024-07-25 11:46:34.557138] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:35.732 [2024-07-25 11:46:34.557151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:35.732 [2024-07-25 11:46:34.557164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:35.732 [2024-07-25 11:46:34.557176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:35.732 [2024-07-25 11:46:34.557189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:35.732 [2024-07-25 11:46:34.557202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:35.732 [2024-07-25 11:46:34.557215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:35.732 [2024-07-25 11:46:34.557227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:35.732 [2024-07-25 11:46:34.557240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:35.732 [2024-07-25 11:46:34.557252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:35.732 [2024-07-25 11:46:34.557265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:35.732 [2024-07-25 11:46:34.557277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:35.732 [2024-07-25 11:46:34.557290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:35.732 [2024-07-25 11:46:34.557302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:35.732 [2024-07-25 11:46:34.557315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:35.732 [2024-07-25 11:46:34.557328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:35.732 [2024-07-25 11:46:34.557340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:35.732 [2024-07-25 11:46:34.557353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:35.732 [2024-07-25 11:46:34.557365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:35.732 [2024-07-25 11:46:34.557377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:35.732 [2024-07-25 11:46:34.557390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:35.732 [2024-07-25 11:46:34.557403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:35.732 [2024-07-25 11:46:34.557416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:35.732 [2024-07-25 11:46:34.557428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:35.732 [2024-07-25 11:46:34.557450] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:35.732 [2024-07-25 11:46:34.557463] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] device UUID: d38b5acd-953a-4711-aafb-f6576962b114 00:21:35.732 [2024-07-25 11:46:34.557476] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:35.732 [2024-07-25 11:46:34.557488] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:35.732 [2024-07-25 11:46:34.557499] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:35.732 [2024-07-25 11:46:34.557528] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:35.732 [2024-07-25 11:46:34.557540] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:35.732 [2024-07-25 11:46:34.557553] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:35.732 [2024-07-25 11:46:34.557564] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:35.732 [2024-07-25 11:46:34.557577] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:35.732 [2024-07-25 11:46:34.557589] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:35.732 [2024-07-25 11:46:34.557601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.732 [2024-07-25 11:46:34.557614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:35.732 [2024-07-25 11:46:34.557634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.169 ms 00:21:35.732 [2024-07-25 11:46:34.557646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.732 [2024-07-25 11:46:34.575146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.732 [2024-07-25 11:46:34.575320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:35.732 [2024-07-25 11:46:34.575436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.467 ms 00:21:35.732 [2024-07-25 11:46:34.575488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.732 [2024-07-25 11:46:34.576034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.732 [2024-07-25 11:46:34.576181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:35.732 [2024-07-25 11:46:34.576314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.462 ms 00:21:35.732 [2024-07-25 11:46:34.576365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.732 [2024-07-25 11:46:34.618490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:35.732 [2024-07-25 11:46:34.618693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:35.732 [2024-07-25 11:46:34.618823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:35.732 [2024-07-25 11:46:34.618875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.732 [2024-07-25 11:46:34.619163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:35.732 [2024-07-25 11:46:34.619307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:35.732 [2024-07-25 11:46:34.619418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:35.732 [2024-07-25 11:46:34.619530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.732 [2024-07-25 11:46:34.619656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:35.732 [2024-07-25 11:46:34.619750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
trim map 00:21:35.732 [2024-07-25 11:46:34.619864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:35.732 [2024-07-25 11:46:34.619932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.732 [2024-07-25 11:46:34.620039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:35.732 [2024-07-25 11:46:34.620086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:35.732 [2024-07-25 11:46:34.620135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:35.732 [2024-07-25 11:46:34.620173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.733 [2024-07-25 11:46:34.718915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:35.733 [2024-07-25 11:46:34.719167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:35.733 [2024-07-25 11:46:34.719299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:35.733 [2024-07-25 11:46:34.719351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.991 [2024-07-25 11:46:34.805413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:35.991 [2024-07-25 11:46:34.805654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:35.991 [2024-07-25 11:46:34.805777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:35.991 [2024-07-25 11:46:34.805829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.991 [2024-07-25 11:46:34.805989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:35.991 [2024-07-25 11:46:34.806051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:35.991 [2024-07-25 11:46:34.806154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:35.991 [2024-07-25 11:46:34.806204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.991 [2024-07-25 11:46:34.806349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:35.991 [2024-07-25 11:46:34.806404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:35.991 [2024-07-25 11:46:34.806424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:35.991 [2024-07-25 11:46:34.806445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.991 [2024-07-25 11:46:34.806583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:35.991 [2024-07-25 11:46:34.806604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:35.991 [2024-07-25 11:46:34.806618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:35.991 [2024-07-25 11:46:34.806630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.991 [2024-07-25 11:46:34.806684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:35.991 [2024-07-25 11:46:34.806702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:35.991 [2024-07-25 11:46:34.806720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:35.991 [2024-07-25 11:46:34.806732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.991 [2024-07-25 11:46:34.806796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:35.991 [2024-07-25 11:46:34.806813] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:35.991 [2024-07-25 11:46:34.806826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:35.991 [2024-07-25 11:46:34.806837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.991 [2024-07-25 11:46:34.806902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:35.991 [2024-07-25 11:46:34.806935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:35.991 [2024-07-25 11:46:34.806951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:35.991 [2024-07-25 11:46:34.806971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.991 [2024-07-25 11:46:34.807179] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 441.310 ms, result 0 00:21:37.367 00:21:37.367 00:21:37.367 11:46:36 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=80077 00:21:37.367 11:46:36 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:21:37.367 11:46:36 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 80077 00:21:37.367 11:46:36 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 80077 ']' 00:21:37.367 11:46:36 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:37.367 11:46:36 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:37.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:37.367 11:46:36 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:37.367 11:46:36 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:37.367 11:46:36 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:21:37.367 [2024-07-25 11:46:36.243118] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:21:37.367 [2024-07-25 11:46:36.243298] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80077 ] 00:21:37.625 [2024-07-25 11:46:36.420755] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:37.625 [2024-07-25 11:46:36.646027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:38.581 11:46:37 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:38.581 11:46:37 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:21:38.581 11:46:37 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:21:38.839 [2024-07-25 11:46:37.715715] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:38.839 [2024-07-25 11:46:37.715812] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:39.099 [2024-07-25 11:46:37.902689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.099 [2024-07-25 11:46:37.902781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:39.099 [2024-07-25 11:46:37.902806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:39.099 [2024-07-25 11:46:37.902827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.099 [2024-07-25 11:46:37.906999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.099 [2024-07-25 11:46:37.907053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:39.099 [2024-07-25 11:46:37.907073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.135 ms 00:21:39.099 [2024-07-25 11:46:37.907092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.099 [2024-07-25 11:46:37.907245] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:39.099 [2024-07-25 11:46:37.908457] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:39.099 [2024-07-25 11:46:37.908502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.099 [2024-07-25 11:46:37.908526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:39.099 [2024-07-25 11:46:37.908542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.284 ms 00:21:39.099 [2024-07-25 11:46:37.908567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.099 [2024-07-25 11:46:37.910796] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:39.099 [2024-07-25 11:46:37.928332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.099 [2024-07-25 11:46:37.928378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:39.099 [2024-07-25 11:46:37.928403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.532 ms 00:21:39.099 [2024-07-25 11:46:37.928417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.099 [2024-07-25 11:46:37.928546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.099 [2024-07-25 11:46:37.928569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:39.099 [2024-07-25 11:46:37.928587] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:21:39.099 [2024-07-25 11:46:37.928600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.099 [2024-07-25 11:46:37.938120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.099 [2024-07-25 11:46:37.938169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:39.099 [2024-07-25 11:46:37.938196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.440 ms 00:21:39.099 [2024-07-25 11:46:37.938210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.099 [2024-07-25 11:46:37.938387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.099 [2024-07-25 11:46:37.938410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:39.099 [2024-07-25 11:46:37.938427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:21:39.099 [2024-07-25 11:46:37.938445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.099 [2024-07-25 11:46:37.938516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.099 [2024-07-25 11:46:37.938534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:39.099 [2024-07-25 11:46:37.938551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:21:39.099 [2024-07-25 11:46:37.938563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.099 [2024-07-25 11:46:37.938608] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:39.099 [2024-07-25 11:46:37.943722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.099 [2024-07-25 11:46:37.943765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:39.099 [2024-07-25 11:46:37.943797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.131 ms 00:21:39.099 [2024-07-25 11:46:37.943813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.099 [2024-07-25 11:46:37.943885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.099 [2024-07-25 11:46:37.943912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:39.099 [2024-07-25 11:46:37.943929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:39.099 [2024-07-25 11:46:37.943994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.099 [2024-07-25 11:46:37.944032] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:39.099 [2024-07-25 11:46:37.944081] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:39.099 [2024-07-25 11:46:37.944136] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:39.099 [2024-07-25 11:46:37.944167] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:21:39.099 [2024-07-25 11:46:37.944294] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:39.099 [2024-07-25 11:46:37.944325] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:39.099 [2024-07-25 11:46:37.944342] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:21:39.099 [2024-07-25 11:46:37.944363] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:39.099 [2024-07-25 11:46:37.944378] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:39.099 [2024-07-25 11:46:37.944395] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:39.099 [2024-07-25 11:46:37.944408] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:39.099 [2024-07-25 11:46:37.944424] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:39.099 [2024-07-25 11:46:37.944436] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:39.099 [2024-07-25 11:46:37.944463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.099 [2024-07-25 11:46:37.944476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:39.099 [2024-07-25 11:46:37.944492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.420 ms 00:21:39.099 [2024-07-25 11:46:37.944507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.099 [2024-07-25 11:46:37.944608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.099 [2024-07-25 11:46:37.944631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:39.099 [2024-07-25 11:46:37.944648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:21:39.099 [2024-07-25 11:46:37.944661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.099 [2024-07-25 11:46:37.944792] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:39.099 [2024-07-25 11:46:37.944814] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:39.099 [2024-07-25 11:46:37.944832] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:39.099 [2024-07-25 11:46:37.944845] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:39.099 [2024-07-25 11:46:37.944867] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:39.099 [2024-07-25 11:46:37.944879] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:39.099 [2024-07-25 11:46:37.944894] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:39.099 [2024-07-25 11:46:37.944906] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:39.099 [2024-07-25 11:46:37.944938] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:39.099 [2024-07-25 11:46:37.944953] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:39.099 [2024-07-25 11:46:37.944968] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:39.099 [2024-07-25 11:46:37.944980] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:39.099 [2024-07-25 11:46:37.944994] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:39.099 [2024-07-25 11:46:37.945005] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:39.099 [2024-07-25 11:46:37.945019] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:39.099 [2024-07-25 11:46:37.945030] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:39.099 
[2024-07-25 11:46:37.945044] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:39.099 [2024-07-25 11:46:37.945056] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:39.099 [2024-07-25 11:46:37.945073] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:39.099 [2024-07-25 11:46:37.945086] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:39.099 [2024-07-25 11:46:37.945100] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:39.099 [2024-07-25 11:46:37.945112] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:39.099 [2024-07-25 11:46:37.945130] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:39.099 [2024-07-25 11:46:37.945151] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:39.099 [2024-07-25 11:46:37.945168] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:39.099 [2024-07-25 11:46:37.945180] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:39.099 [2024-07-25 11:46:37.945194] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:39.099 [2024-07-25 11:46:37.945218] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:39.099 [2024-07-25 11:46:37.945236] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:39.099 [2024-07-25 11:46:37.945249] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:39.099 [2024-07-25 11:46:37.945263] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:39.099 [2024-07-25 11:46:37.945275] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:39.099 [2024-07-25 11:46:37.945289] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:39.099 [2024-07-25 11:46:37.945302] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:39.099 [2024-07-25 11:46:37.945317] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:39.099 [2024-07-25 11:46:37.945329] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:39.099 [2024-07-25 11:46:37.945343] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:39.099 [2024-07-25 11:46:37.945355] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:39.100 [2024-07-25 11:46:37.945370] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:39.100 [2024-07-25 11:46:37.945382] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:39.100 [2024-07-25 11:46:37.945399] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:39.100 [2024-07-25 11:46:37.945411] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:39.100 [2024-07-25 11:46:37.945426] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:39.100 [2024-07-25 11:46:37.945437] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:39.100 [2024-07-25 11:46:37.945454] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:39.100 [2024-07-25 11:46:37.945467] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:39.100 [2024-07-25 11:46:37.945482] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:39.100 [2024-07-25 11:46:37.945496] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:21:39.100 [2024-07-25 11:46:37.945511] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:39.100 [2024-07-25 11:46:37.945523] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:39.100 [2024-07-25 11:46:37.945538] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:39.100 [2024-07-25 11:46:37.945550] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:39.100 [2024-07-25 11:46:37.945566] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:39.100 [2024-07-25 11:46:37.945579] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:39.100 [2024-07-25 11:46:37.945599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:39.100 [2024-07-25 11:46:37.945614] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:39.100 [2024-07-25 11:46:37.945634] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:39.100 [2024-07-25 11:46:37.945646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:21:39.100 [2024-07-25 11:46:37.945662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:39.100 [2024-07-25 11:46:37.945675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:39.100 [2024-07-25 11:46:37.945690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:39.100 [2024-07-25 11:46:37.945702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:39.100 [2024-07-25 11:46:37.945718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:39.100 [2024-07-25 11:46:37.945730] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:39.100 [2024-07-25 11:46:37.945747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:39.100 [2024-07-25 11:46:37.945759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:39.100 [2024-07-25 11:46:37.945774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:39.100 [2024-07-25 11:46:37.945787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:39.100 [2024-07-25 11:46:37.945803] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:39.100 [2024-07-25 11:46:37.945816] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:39.100 [2024-07-25 
11:46:37.945844] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:39.100 [2024-07-25 11:46:37.945857] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:39.100 [2024-07-25 11:46:37.945876] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:39.100 [2024-07-25 11:46:37.945889] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:39.100 [2024-07-25 11:46:37.945935] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:39.100 [2024-07-25 11:46:37.945955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.100 [2024-07-25 11:46:37.945972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:39.100 [2024-07-25 11:46:37.945986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.233 ms 00:21:39.100 [2024-07-25 11:46:37.946006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.100 [2024-07-25 11:46:37.988589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.100 [2024-07-25 11:46:37.988704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:39.100 [2024-07-25 11:46:37.988736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.443 ms 00:21:39.100 [2024-07-25 11:46:37.988758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.100 [2024-07-25 11:46:37.989011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.100 [2024-07-25 11:46:37.989054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:39.100 [2024-07-25 11:46:37.989073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:21:39.100 [2024-07-25 11:46:37.989093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.100 [2024-07-25 11:46:38.037180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.100 [2024-07-25 11:46:38.037286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:39.100 [2024-07-25 11:46:38.037309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.049 ms 00:21:39.100 [2024-07-25 11:46:38.037329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.100 [2024-07-25 11:46:38.037519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.100 [2024-07-25 11:46:38.037551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:39.100 [2024-07-25 11:46:38.037568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:39.100 [2024-07-25 11:46:38.037588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.100 [2024-07-25 11:46:38.038255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.100 [2024-07-25 11:46:38.038306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:39.100 [2024-07-25 11:46:38.038325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.633 ms 00:21:39.100 [2024-07-25 11:46:38.038355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:21:39.100 [2024-07-25 11:46:38.038549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.100 [2024-07-25 11:46:38.038584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:39.100 [2024-07-25 11:46:38.038600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.159 ms 00:21:39.100 [2024-07-25 11:46:38.038619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.100 [2024-07-25 11:46:38.062754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.100 [2024-07-25 11:46:38.062825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:39.100 [2024-07-25 11:46:38.062847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.100 ms 00:21:39.100 [2024-07-25 11:46:38.062868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.100 [2024-07-25 11:46:38.081199] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:21:39.100 [2024-07-25 11:46:38.081276] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:39.100 [2024-07-25 11:46:38.081304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.100 [2024-07-25 11:46:38.081325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:39.100 [2024-07-25 11:46:38.081341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.229 ms 00:21:39.100 [2024-07-25 11:46:38.081375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.100 [2024-07-25 11:46:38.111588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.100 [2024-07-25 11:46:38.111657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:39.100 [2024-07-25 11:46:38.111677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.117 ms 00:21:39.100 [2024-07-25 11:46:38.111706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.100 [2024-07-25 11:46:38.127786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.100 [2024-07-25 11:46:38.127886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:39.100 [2024-07-25 11:46:38.127939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.983 ms 00:21:39.100 [2024-07-25 11:46:38.127968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.100 [2024-07-25 11:46:38.143249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.100 [2024-07-25 11:46:38.143300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:39.100 [2024-07-25 11:46:38.143320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.185 ms 00:21:39.100 [2024-07-25 11:46:38.143339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.100 [2024-07-25 11:46:38.144250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.100 [2024-07-25 11:46:38.144307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:39.100 [2024-07-25 11:46:38.144327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.745 ms 00:21:39.100 [2024-07-25 11:46:38.144347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.359 [2024-07-25 
11:46:38.239794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.359 [2024-07-25 11:46:38.239916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:39.359 [2024-07-25 11:46:38.239986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 95.407 ms 00:21:39.359 [2024-07-25 11:46:38.240009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.359 [2024-07-25 11:46:38.252002] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:39.359 [2024-07-25 11:46:38.274240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.359 [2024-07-25 11:46:38.274315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:39.359 [2024-07-25 11:46:38.274351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.010 ms 00:21:39.359 [2024-07-25 11:46:38.274366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.359 [2024-07-25 11:46:38.274579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.359 [2024-07-25 11:46:38.274602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:39.359 [2024-07-25 11:46:38.274624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:21:39.359 [2024-07-25 11:46:38.274638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.359 [2024-07-25 11:46:38.274730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.359 [2024-07-25 11:46:38.274748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:39.359 [2024-07-25 11:46:38.274776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:21:39.359 [2024-07-25 11:46:38.274790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.359 [2024-07-25 11:46:38.274836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.359 [2024-07-25 11:46:38.274853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:39.359 [2024-07-25 11:46:38.274872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:39.359 [2024-07-25 11:46:38.274886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.359 [2024-07-25 11:46:38.274959] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:39.359 [2024-07-25 11:46:38.274980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.359 [2024-07-25 11:46:38.275012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:39.359 [2024-07-25 11:46:38.275028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:21:39.359 [2024-07-25 11:46:38.275055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.359 [2024-07-25 11:46:38.307119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.359 [2024-07-25 11:46:38.307185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:39.359 [2024-07-25 11:46:38.307205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.028 ms 00:21:39.359 [2024-07-25 11:46:38.307225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.359 [2024-07-25 11:46:38.307426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.359 [2024-07-25 11:46:38.307471] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:39.359 [2024-07-25 11:46:38.307492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:21:39.359 [2024-07-25 11:46:38.307511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.359 [2024-07-25 11:46:38.308828] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:39.359 [2024-07-25 11:46:38.313032] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 405.707 ms, result 0 00:21:39.359 [2024-07-25 11:46:38.314155] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:39.359 Some configs were skipped because the RPC state that can call them passed over. 00:21:39.359 11:46:38 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:21:39.617 [2024-07-25 11:46:38.627826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.617 [2024-07-25 11:46:38.627892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:21:39.617 [2024-07-25 11:46:38.627940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.575 ms 00:21:39.617 [2024-07-25 11:46:38.627957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.618 [2024-07-25 11:46:38.628028] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.791 ms, result 0 00:21:39.618 true 00:21:39.618 11:46:38 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:21:39.875 [2024-07-25 11:46:38.907503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.875 [2024-07-25 11:46:38.907589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:21:39.875 [2024-07-25 11:46:38.907614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.994 ms 00:21:39.875 [2024-07-25 11:46:38.907635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.875 [2024-07-25 11:46:38.907698] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.185 ms, result 0 00:21:39.875 true 00:21:40.134 11:46:38 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 80077 00:21:40.134 11:46:38 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 80077 ']' 00:21:40.134 11:46:38 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 80077 00:21:40.134 11:46:38 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:21:40.134 11:46:38 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:40.134 11:46:38 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80077 00:21:40.134 11:46:38 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:40.134 11:46:38 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:40.134 11:46:38 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80077' 00:21:40.134 killing process with pid 80077 00:21:40.134 11:46:38 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 80077 00:21:40.134 11:46:38 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 80077 00:21:41.070 [2024-07-25 11:46:40.035575] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.070 [2024-07-25 11:46:40.035676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:41.070 [2024-07-25 11:46:40.035703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:41.070 [2024-07-25 11:46:40.035719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.070 [2024-07-25 11:46:40.035758] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:41.070 [2024-07-25 11:46:40.039516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.070 [2024-07-25 11:46:40.039559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:41.070 [2024-07-25 11:46:40.039577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.734 ms 00:21:41.070 [2024-07-25 11:46:40.039594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.070 [2024-07-25 11:46:40.039960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.070 [2024-07-25 11:46:40.040005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:41.070 [2024-07-25 11:46:40.040023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:21:41.070 [2024-07-25 11:46:40.040038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.070 [2024-07-25 11:46:40.044079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.070 [2024-07-25 11:46:40.044132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:41.070 [2024-07-25 11:46:40.044153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.013 ms 00:21:41.070 [2024-07-25 11:46:40.044177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.070 [2024-07-25 11:46:40.051795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.070 [2024-07-25 11:46:40.051855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:41.070 [2024-07-25 11:46:40.051873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.562 ms 00:21:41.070 [2024-07-25 11:46:40.051890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.070 [2024-07-25 11:46:40.064938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.070 [2024-07-25 11:46:40.064988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:41.070 [2024-07-25 11:46:40.065007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.943 ms 00:21:41.070 [2024-07-25 11:46:40.065024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.070 [2024-07-25 11:46:40.074352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.070 [2024-07-25 11:46:40.074426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:41.070 [2024-07-25 11:46:40.074444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.258 ms 00:21:41.070 [2024-07-25 11:46:40.074459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.070 [2024-07-25 11:46:40.074656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.070 [2024-07-25 11:46:40.074684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:41.070 [2024-07-25 11:46:40.074700] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:21:41.070 [2024-07-25 11:46:40.074729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.070 [2024-07-25 11:46:40.087708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.070 [2024-07-25 11:46:40.087765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:41.070 [2024-07-25 11:46:40.087784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.950 ms 00:21:41.070 [2024-07-25 11:46:40.087804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.070 [2024-07-25 11:46:40.100479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.070 [2024-07-25 11:46:40.100537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:41.070 [2024-07-25 11:46:40.100555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.605 ms 00:21:41.070 [2024-07-25 11:46:40.100582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.070 [2024-07-25 11:46:40.112970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.070 [2024-07-25 11:46:40.113028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:41.070 [2024-07-25 11:46:40.113047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.318 ms 00:21:41.070 [2024-07-25 11:46:40.113066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.330 [2024-07-25 11:46:40.125489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.330 [2024-07-25 11:46:40.125547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:41.330 [2024-07-25 11:46:40.125567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.313 ms 00:21:41.330 [2024-07-25 11:46:40.125585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.330 [2024-07-25 11:46:40.125654] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:41.330 [2024-07-25 11:46:40.125691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.125715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.125739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.125759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.125784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.125802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.125832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.125851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.125875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.125893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 
11:46:40.125930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.125951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.125976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.125999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.126024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.126038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.126056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.126070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.126087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.126101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.126118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.126132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.126155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.126169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.126187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.126199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.126217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.126230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.126248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.126262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.126279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.126292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.126310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.126324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.126343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:21:41.330 [2024-07-25 11:46:40.126358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.126377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.126390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.126413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.126427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.126447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.126461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.126480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.126494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.126512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.126526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.126544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.126558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.126579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.126593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.126612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.126625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.126643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.126657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.126680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.126694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.126712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.126726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.126744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.126758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.126777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.126791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.126809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.126823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.126842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.126859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.126888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.126908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.126950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.126972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.127002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.127021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.127046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.127064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.127089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:41.330 [2024-07-25 11:46:40.127111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:41.331 [2024-07-25 11:46:40.127135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:41.331 [2024-07-25 11:46:40.127149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:41.331 [2024-07-25 11:46:40.127168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:41.331 [2024-07-25 11:46:40.127183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:41.331 [2024-07-25 11:46:40.127201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:41.331 [2024-07-25 11:46:40.127215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:41.331 [2024-07-25 11:46:40.127234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:41.331 [2024-07-25 11:46:40.127247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:41.331 [2024-07-25 11:46:40.127265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:41.331 [2024-07-25 11:46:40.127279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:41.331 [2024-07-25 11:46:40.127303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:41.331 [2024-07-25 11:46:40.127317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:41.331 [2024-07-25 11:46:40.127336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:41.331 [2024-07-25 11:46:40.127349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:41.331 [2024-07-25 11:46:40.127367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:41.331 [2024-07-25 11:46:40.127381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:41.331 [2024-07-25 11:46:40.127399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:41.331 [2024-07-25 11:46:40.127413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:41.331 [2024-07-25 11:46:40.127433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:41.331 [2024-07-25 11:46:40.127448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:41.331 [2024-07-25 11:46:40.127467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:41.331 [2024-07-25 11:46:40.127481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:41.331 [2024-07-25 11:46:40.127499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:41.331 [2024-07-25 11:46:40.127513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:41.331 [2024-07-25 11:46:40.127542] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:41.331 [2024-07-25 11:46:40.127557] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d38b5acd-953a-4711-aafb-f6576962b114 00:21:41.331 [2024-07-25 11:46:40.127596] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:41.331 [2024-07-25 11:46:40.127618] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:41.331 [2024-07-25 11:46:40.127638] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:41.331 [2024-07-25 11:46:40.127653] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:41.331 [2024-07-25 11:46:40.127672] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:41.331 [2024-07-25 11:46:40.127686] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:41.331 [2024-07-25 11:46:40.127704] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:41.331 [2024-07-25 11:46:40.127716] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:41.331 [2024-07-25 11:46:40.127751] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:41.331 [2024-07-25 11:46:40.127766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:41.331 [2024-07-25 11:46:40.127785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:41.331 [2024-07-25 11:46:40.127800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.114 ms 00:21:41.331 [2024-07-25 11:46:40.127826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.331 [2024-07-25 11:46:40.145229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.331 [2024-07-25 11:46:40.145291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:41.331 [2024-07-25 11:46:40.145319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.355 ms 00:21:41.331 [2024-07-25 11:46:40.145356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.331 [2024-07-25 11:46:40.145960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.331 [2024-07-25 11:46:40.146006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:41.331 [2024-07-25 11:46:40.146031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.489 ms 00:21:41.331 [2024-07-25 11:46:40.146050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.331 [2024-07-25 11:46:40.206687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.331 [2024-07-25 11:46:40.206757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:41.331 [2024-07-25 11:46:40.206778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.331 [2024-07-25 11:46:40.206795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.331 [2024-07-25 11:46:40.206959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.331 [2024-07-25 11:46:40.206985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:41.331 [2024-07-25 11:46:40.207003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.331 [2024-07-25 11:46:40.207020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.331 [2024-07-25 11:46:40.207093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.331 [2024-07-25 11:46:40.207129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:41.331 [2024-07-25 11:46:40.207145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.331 [2024-07-25 11:46:40.207171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.331 [2024-07-25 11:46:40.207214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.331 [2024-07-25 11:46:40.207249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:41.331 [2024-07-25 11:46:40.207265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.331 [2024-07-25 11:46:40.207291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.331 [2024-07-25 11:46:40.315922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.331 [2024-07-25 11:46:40.316034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:41.331 [2024-07-25 11:46:40.316059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.331 [2024-07-25 11:46:40.316080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.590 [2024-07-25 
11:46:40.406625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.590 [2024-07-25 11:46:40.406720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:41.590 [2024-07-25 11:46:40.406751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.590 [2024-07-25 11:46:40.406771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.590 [2024-07-25 11:46:40.406942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.590 [2024-07-25 11:46:40.406976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:41.590 [2024-07-25 11:46:40.406993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.590 [2024-07-25 11:46:40.407018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.590 [2024-07-25 11:46:40.407065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.590 [2024-07-25 11:46:40.407092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:41.590 [2024-07-25 11:46:40.407107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.590 [2024-07-25 11:46:40.407126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.590 [2024-07-25 11:46:40.407267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.590 [2024-07-25 11:46:40.407308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:41.590 [2024-07-25 11:46:40.407325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.590 [2024-07-25 11:46:40.407345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.590 [2024-07-25 11:46:40.407411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.590 [2024-07-25 11:46:40.407441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:41.590 [2024-07-25 11:46:40.407456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.590 [2024-07-25 11:46:40.407475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.590 [2024-07-25 11:46:40.407551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.590 [2024-07-25 11:46:40.407587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:41.590 [2024-07-25 11:46:40.407603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.590 [2024-07-25 11:46:40.407628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.590 [2024-07-25 11:46:40.407697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.590 [2024-07-25 11:46:40.407734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:41.590 [2024-07-25 11:46:40.407750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.590 [2024-07-25 11:46:40.407769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.590 [2024-07-25 11:46:40.408019] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 372.380 ms, result 0 00:21:42.524 11:46:41 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:21:42.524 11:46:41 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:42.524 [2024-07-25 11:46:41.533871] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:21:42.524 [2024-07-25 11:46:41.534065] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80147 ] 00:21:42.783 [2024-07-25 11:46:41.704093] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:43.102 [2024-07-25 11:46:42.000473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:43.376 [2024-07-25 11:46:42.366441] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:43.376 [2024-07-25 11:46:42.366531] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:43.636 [2024-07-25 11:46:42.532906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.636 [2024-07-25 11:46:42.533007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:43.636 [2024-07-25 11:46:42.533032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:43.636 [2024-07-25 11:46:42.533045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.636 [2024-07-25 11:46:42.536622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.636 [2024-07-25 11:46:42.536666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:43.636 [2024-07-25 11:46:42.536683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.545 ms 00:21:43.636 [2024-07-25 11:46:42.536695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.636 [2024-07-25 11:46:42.536968] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:43.636 [2024-07-25 11:46:42.537954] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:43.636 [2024-07-25 11:46:42.538003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.636 [2024-07-25 11:46:42.538019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:43.636 [2024-07-25 11:46:42.538033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.051 ms 00:21:43.636 [2024-07-25 11:46:42.538044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.636 [2024-07-25 11:46:42.540136] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:43.636 [2024-07-25 11:46:42.557152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.636 [2024-07-25 11:46:42.557197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:43.636 [2024-07-25 11:46:42.557222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.019 ms 00:21:43.636 [2024-07-25 11:46:42.557235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.636 [2024-07-25 11:46:42.557365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.636 [2024-07-25 11:46:42.557387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:43.636 [2024-07-25 11:46:42.557402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.033 ms 00:21:43.636 [2024-07-25 11:46:42.557414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.636 [2024-07-25 11:46:42.566463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.636 [2024-07-25 11:46:42.566547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:43.636 [2024-07-25 11:46:42.566578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.981 ms 00:21:43.636 [2024-07-25 11:46:42.566601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.636 [2024-07-25 11:46:42.566826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.636 [2024-07-25 11:46:42.566861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:43.636 [2024-07-25 11:46:42.566895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:21:43.636 [2024-07-25 11:46:42.566942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.636 [2024-07-25 11:46:42.567028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.636 [2024-07-25 11:46:42.567065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:43.636 [2024-07-25 11:46:42.567100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:21:43.636 [2024-07-25 11:46:42.567122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.636 [2024-07-25 11:46:42.567188] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:43.636 [2024-07-25 11:46:42.573527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.636 [2024-07-25 11:46:42.573576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:43.636 [2024-07-25 11:46:42.573593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.357 ms 00:21:43.636 [2024-07-25 11:46:42.573606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.636 [2024-07-25 11:46:42.573713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.636 [2024-07-25 11:46:42.573751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:43.636 [2024-07-25 11:46:42.573765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:43.636 [2024-07-25 11:46:42.573776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.636 [2024-07-25 11:46:42.573809] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:43.636 [2024-07-25 11:46:42.573851] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:43.636 [2024-07-25 11:46:42.573900] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:43.636 [2024-07-25 11:46:42.574023] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:21:43.636 [2024-07-25 11:46:42.574147] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:43.636 [2024-07-25 11:46:42.574165] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:43.636 [2024-07-25 11:46:42.574181] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:21:43.636 [2024-07-25 11:46:42.574197] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:43.636 [2024-07-25 11:46:42.574219] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:43.636 [2024-07-25 11:46:42.574240] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:43.636 [2024-07-25 11:46:42.574252] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:43.636 [2024-07-25 11:46:42.574264] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:43.636 [2024-07-25 11:46:42.574276] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:43.636 [2024-07-25 11:46:42.574290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.636 [2024-07-25 11:46:42.574302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:43.636 [2024-07-25 11:46:42.574315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.484 ms 00:21:43.636 [2024-07-25 11:46:42.574326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.636 [2024-07-25 11:46:42.574427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.636 [2024-07-25 11:46:42.574449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:43.636 [2024-07-25 11:46:42.574468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:21:43.636 [2024-07-25 11:46:42.574480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.636 [2024-07-25 11:46:42.574595] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:43.636 [2024-07-25 11:46:42.574613] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:43.636 [2024-07-25 11:46:42.574626] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:43.636 [2024-07-25 11:46:42.574638] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:43.636 [2024-07-25 11:46:42.574650] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:43.637 [2024-07-25 11:46:42.574660] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:43.637 [2024-07-25 11:46:42.574670] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:43.637 [2024-07-25 11:46:42.574681] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:43.637 [2024-07-25 11:46:42.574692] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:43.637 [2024-07-25 11:46:42.574702] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:43.637 [2024-07-25 11:46:42.574713] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:43.637 [2024-07-25 11:46:42.574724] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:43.637 [2024-07-25 11:46:42.574734] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:43.637 [2024-07-25 11:46:42.574745] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:43.637 [2024-07-25 11:46:42.574755] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:43.637 [2024-07-25 11:46:42.574766] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:43.637 [2024-07-25 11:46:42.574776] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:43.637 [2024-07-25 11:46:42.574787] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:43.637 [2024-07-25 11:46:42.574813] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:43.637 [2024-07-25 11:46:42.574824] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:43.637 [2024-07-25 11:46:42.574835] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:43.637 [2024-07-25 11:46:42.574846] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:43.637 [2024-07-25 11:46:42.574856] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:43.637 [2024-07-25 11:46:42.574866] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:43.637 [2024-07-25 11:46:42.574877] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:43.637 [2024-07-25 11:46:42.574887] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:43.637 [2024-07-25 11:46:42.574897] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:43.637 [2024-07-25 11:46:42.574908] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:43.637 [2024-07-25 11:46:42.574932] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:43.637 [2024-07-25 11:46:42.574947] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:43.637 [2024-07-25 11:46:42.574965] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:43.637 [2024-07-25 11:46:42.574983] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:43.637 [2024-07-25 11:46:42.575002] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:43.637 [2024-07-25 11:46:42.575016] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:43.637 [2024-07-25 11:46:42.575028] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:43.637 [2024-07-25 11:46:42.575047] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:43.637 [2024-07-25 11:46:42.575058] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:43.637 [2024-07-25 11:46:42.575077] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:43.637 [2024-07-25 11:46:42.575088] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:43.637 [2024-07-25 11:46:42.575099] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:43.637 [2024-07-25 11:46:42.575110] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:43.637 [2024-07-25 11:46:42.575121] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:43.637 [2024-07-25 11:46:42.575132] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:43.637 [2024-07-25 11:46:42.575142] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:43.637 [2024-07-25 11:46:42.575154] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:43.637 [2024-07-25 11:46:42.575166] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:43.637 [2024-07-25 11:46:42.575177] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:43.637 [2024-07-25 11:46:42.575196] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:43.637 
[2024-07-25 11:46:42.575208] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:43.637 [2024-07-25 11:46:42.575219] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:43.637 [2024-07-25 11:46:42.575230] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:43.637 [2024-07-25 11:46:42.575241] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:43.637 [2024-07-25 11:46:42.575253] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:43.637 [2024-07-25 11:46:42.575265] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:43.637 [2024-07-25 11:46:42.575281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:43.637 [2024-07-25 11:46:42.575302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:43.637 [2024-07-25 11:46:42.575314] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:43.637 [2024-07-25 11:46:42.575326] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:21:43.637 [2024-07-25 11:46:42.575338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:43.637 [2024-07-25 11:46:42.575350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:43.637 [2024-07-25 11:46:42.575362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:43.637 [2024-07-25 11:46:42.575373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:43.637 [2024-07-25 11:46:42.575385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:43.637 [2024-07-25 11:46:42.575397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:43.637 [2024-07-25 11:46:42.575409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:43.637 [2024-07-25 11:46:42.575421] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:43.637 [2024-07-25 11:46:42.575433] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:43.637 [2024-07-25 11:46:42.575445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:43.637 [2024-07-25 11:46:42.575456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:43.637 [2024-07-25 11:46:42.575467] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:43.637 [2024-07-25 11:46:42.575480] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:43.637 [2024-07-25 11:46:42.575494] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:43.637 [2024-07-25 11:46:42.575505] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:43.637 [2024-07-25 11:46:42.575517] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:43.637 [2024-07-25 11:46:42.575528] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:43.637 [2024-07-25 11:46:42.575540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.637 [2024-07-25 11:46:42.575552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:43.637 [2024-07-25 11:46:42.575564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.010 ms 00:21:43.637 [2024-07-25 11:46:42.575575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.637 [2024-07-25 11:46:42.623507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.637 [2024-07-25 11:46:42.623581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:43.637 [2024-07-25 11:46:42.623612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.846 ms 00:21:43.637 [2024-07-25 11:46:42.623630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.637 [2024-07-25 11:46:42.623900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.637 [2024-07-25 11:46:42.623950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:43.637 [2024-07-25 11:46:42.623967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:21:43.637 [2024-07-25 11:46:42.623979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.637 [2024-07-25 11:46:42.668486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.637 [2024-07-25 11:46:42.668559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:43.637 [2024-07-25 11:46:42.668590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.468 ms 00:21:43.637 [2024-07-25 11:46:42.668607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.637 [2024-07-25 11:46:42.668772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.637 [2024-07-25 11:46:42.668794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:43.637 [2024-07-25 11:46:42.668808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:43.637 [2024-07-25 11:46:42.668821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.637 [2024-07-25 11:46:42.669411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.637 [2024-07-25 11:46:42.669450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:43.637 [2024-07-25 11:46:42.669464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.558 ms 00:21:43.637 [2024-07-25 11:46:42.669476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.637 [2024-07-25 
11:46:42.669663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.637 [2024-07-25 11:46:42.669691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:43.637 [2024-07-25 11:46:42.669705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:21:43.637 [2024-07-25 11:46:42.669716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.896 [2024-07-25 11:46:42.689471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.896 [2024-07-25 11:46:42.689520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:43.896 [2024-07-25 11:46:42.689538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.723 ms 00:21:43.896 [2024-07-25 11:46:42.689551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.896 [2024-07-25 11:46:42.706943] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:21:43.896 [2024-07-25 11:46:42.706996] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:43.896 [2024-07-25 11:46:42.707016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.896 [2024-07-25 11:46:42.707029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:43.896 [2024-07-25 11:46:42.707043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.270 ms 00:21:43.896 [2024-07-25 11:46:42.707055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.896 [2024-07-25 11:46:42.737054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.896 [2024-07-25 11:46:42.737101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:43.896 [2024-07-25 11:46:42.737118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.887 ms 00:21:43.896 [2024-07-25 11:46:42.737132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.896 [2024-07-25 11:46:42.753178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.896 [2024-07-25 11:46:42.753239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:43.896 [2024-07-25 11:46:42.753257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.923 ms 00:21:43.896 [2024-07-25 11:46:42.753269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.896 [2024-07-25 11:46:42.768827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.896 [2024-07-25 11:46:42.768869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:43.896 [2024-07-25 11:46:42.768885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.459 ms 00:21:43.896 [2024-07-25 11:46:42.768896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.896 [2024-07-25 11:46:42.769761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.896 [2024-07-25 11:46:42.769800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:43.896 [2024-07-25 11:46:42.769821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.720 ms 00:21:43.896 [2024-07-25 11:46:42.769838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.896 [2024-07-25 11:46:42.847730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:21:43.896 [2024-07-25 11:46:42.847855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:43.896 [2024-07-25 11:46:42.847879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 77.847 ms 00:21:43.896 [2024-07-25 11:46:42.847892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.896 [2024-07-25 11:46:42.860446] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:43.896 [2024-07-25 11:46:42.881140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.896 [2024-07-25 11:46:42.881221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:43.896 [2024-07-25 11:46:42.881244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.020 ms 00:21:43.896 [2024-07-25 11:46:42.881259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.896 [2024-07-25 11:46:42.881443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.896 [2024-07-25 11:46:42.881465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:43.897 [2024-07-25 11:46:42.881480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:43.897 [2024-07-25 11:46:42.881492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.897 [2024-07-25 11:46:42.881573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.897 [2024-07-25 11:46:42.881590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:43.897 [2024-07-25 11:46:42.881603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:21:43.897 [2024-07-25 11:46:42.881615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.897 [2024-07-25 11:46:42.881652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.897 [2024-07-25 11:46:42.881673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:43.897 [2024-07-25 11:46:42.881686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:43.897 [2024-07-25 11:46:42.881698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.897 [2024-07-25 11:46:42.881740] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:43.897 [2024-07-25 11:46:42.881756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.897 [2024-07-25 11:46:42.881769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:43.897 [2024-07-25 11:46:42.881781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:21:43.897 [2024-07-25 11:46:42.881793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.897 [2024-07-25 11:46:42.913343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.897 [2024-07-25 11:46:42.913398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:43.897 [2024-07-25 11:46:42.913416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.517 ms 00:21:43.897 [2024-07-25 11:46:42.913429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.897 [2024-07-25 11:46:42.913569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.897 [2024-07-25 11:46:42.913591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:21:43.897 [2024-07-25 11:46:42.913605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:21:43.897 [2024-07-25 11:46:42.913616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.897 [2024-07-25 11:46:42.914832] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:43.897 [2024-07-25 11:46:42.918838] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 381.530 ms, result 0 00:21:43.897 [2024-07-25 11:46:42.919662] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:43.897 [2024-07-25 11:46:42.935616] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:55.111  Copying: 256/256 [MB] (average 23 MBps)[2024-07-25 11:46:53.838796] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:55.111 [2024-07-25 11:46:53.851874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.111 [2024-07-25 11:46:53.851982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:55.111 [2024-07-25 11:46:53.852006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:55.111 [2024-07-25 11:46:53.852019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.111 [2024-07-25 11:46:53.852073] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:55.111 [2024-07-25 11:46:53.855888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.111 [2024-07-25 11:46:53.855944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:55.111 [2024-07-25 11:46:53.855972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.791 ms 00:21:55.111 [2024-07-25 11:46:53.855984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.111 [2024-07-25 11:46:53.856309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.111 [2024-07-25 11:46:53.856337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:55.111 [2024-07-25 11:46:53.856352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:21:55.111 [2024-07-25 11:46:53.856365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.111 [2024-07-25 11:46:53.860051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.111 [2024-07-25 11:46:53.860081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:55.111 [2024-07-25 11:46:53.860109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.662 ms 00:21:55.111 [2024-07-25 11:46:53.860121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.111 [2024-07-25 11:46:53.867593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.111 [2024-07-25 11:46:53.867641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:55.111 
[2024-07-25 11:46:53.867655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.445 ms 00:21:55.111 [2024-07-25 11:46:53.867667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.111 [2024-07-25 11:46:53.898880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.111 [2024-07-25 11:46:53.898949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:55.111 [2024-07-25 11:46:53.898968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.123 ms 00:21:55.111 [2024-07-25 11:46:53.898981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.111 [2024-07-25 11:46:53.916830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.111 [2024-07-25 11:46:53.916890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:55.111 [2024-07-25 11:46:53.916923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.782 ms 00:21:55.111 [2024-07-25 11:46:53.916957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.111 [2024-07-25 11:46:53.917144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.111 [2024-07-25 11:46:53.917167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:55.111 [2024-07-25 11:46:53.917181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:21:55.111 [2024-07-25 11:46:53.917193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.111 [2024-07-25 11:46:53.947380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.111 [2024-07-25 11:46:53.947436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:55.111 [2024-07-25 11:46:53.947469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.162 ms 00:21:55.111 [2024-07-25 11:46:53.947480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.111 [2024-07-25 11:46:53.977526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.111 [2024-07-25 11:46:53.977606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:55.111 [2024-07-25 11:46:53.977624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.973 ms 00:21:55.111 [2024-07-25 11:46:53.977636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.111 [2024-07-25 11:46:54.009519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.111 [2024-07-25 11:46:54.009602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:55.111 [2024-07-25 11:46:54.009638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.794 ms 00:21:55.111 [2024-07-25 11:46:54.009650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.111 [2024-07-25 11:46:54.040926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.111 [2024-07-25 11:46:54.040996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:55.111 [2024-07-25 11:46:54.041030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.140 ms 00:21:55.111 [2024-07-25 11:46:54.041077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.111 [2024-07-25 11:46:54.041147] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:55.111 [2024-07-25 11:46:54.041192] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:55.111 [2024-07-25 11:46:54.041207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:55.111 [2024-07-25 11:46:54.041220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:55.111 [2024-07-25 11:46:54.041232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:55.111 [2024-07-25 11:46:54.041244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:55.111 [2024-07-25 11:46:54.041272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:55.111 [2024-07-25 11:46:54.041284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:55.111 [2024-07-25 11:46:54.041297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:55.111 [2024-07-25 11:46:54.041309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:55.111 [2024-07-25 11:46:54.041323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:55.111 [2024-07-25 11:46:54.041336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:55.111 [2024-07-25 11:46:54.041348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:55.111 [2024-07-25 11:46:54.041360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:55.111 [2024-07-25 11:46:54.041373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:55.111 [2024-07-25 11:46:54.041385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:55.111 [2024-07-25 11:46:54.041397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:55.111 [2024-07-25 11:46:54.041417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:55.111 [2024-07-25 11:46:54.041429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:55.111 [2024-07-25 11:46:54.041449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:55.111 [2024-07-25 11:46:54.041461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:55.111 [2024-07-25 11:46:54.041474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:55.111 [2024-07-25 11:46:54.041485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:55.111 [2024-07-25 11:46:54.041497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:55.111 [2024-07-25 11:46:54.041509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:55.111 [2024-07-25 11:46:54.041521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:55.111 [2024-07-25 
11:46:54.041533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:55.111 [2024-07-25 11:46:54.041546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:55.111 [2024-07-25 11:46:54.041558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:55.111 [2024-07-25 11:46:54.041571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:55.111 [2024-07-25 11:46:54.041583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:55.111 [2024-07-25 11:46:54.041595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:55.111 [2024-07-25 11:46:54.041607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:55.111 [2024-07-25 11:46:54.041618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:55.111 [2024-07-25 11:46:54.041631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:55.111 [2024-07-25 11:46:54.041646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:55.111 [2024-07-25 11:46:54.041659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:55.111 [2024-07-25 11:46:54.041672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:55.112 [2024-07-25 11:46:54.041684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:55.112 [2024-07-25 11:46:54.041697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:55.112 [2024-07-25 11:46:54.041710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:55.112 [2024-07-25 11:46:54.041722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:55.112 [2024-07-25 11:46:54.041735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:55.112 [2024-07-25 11:46:54.041747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:55.112 [2024-07-25 11:46:54.041760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:55.112 [2024-07-25 11:46:54.041772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:55.112 [2024-07-25 11:46:54.041785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:55.112 [2024-07-25 11:46:54.041797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:55.112 [2024-07-25 11:46:54.041809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:55.112 [2024-07-25 11:46:54.041822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:55.112 [2024-07-25 11:46:54.041834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 
00:21:55.112 [2024-07-25 11:46:54.041847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:55.112 [2024-07-25 11:46:54.041859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:55.112 [2024-07-25 11:46:54.041872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:55.112 [2024-07-25 11:46:54.041884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:55.112 [2024-07-25 11:46:54.041896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:55.112 [2024-07-25 11:46:54.041909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:55.112 [2024-07-25 11:46:54.041921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:55.112 [2024-07-25 11:46:54.041945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:55.112 [2024-07-25 11:46:54.041961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:55.112 [2024-07-25 11:46:54.041973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:55.112 [2024-07-25 11:46:54.041986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:55.112 [2024-07-25 11:46:54.041999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:55.112 [2024-07-25 11:46:54.042011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:55.112 [2024-07-25 11:46:54.042023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:55.112 [2024-07-25 11:46:54.042049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:55.112 [2024-07-25 11:46:54.042061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:55.112 [2024-07-25 11:46:54.042075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:55.112 [2024-07-25 11:46:54.042088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:55.112 [2024-07-25 11:46:54.042101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:55.112 [2024-07-25 11:46:54.042113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:55.112 [2024-07-25 11:46:54.042125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:55.112 [2024-07-25 11:46:54.042137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:55.112 [2024-07-25 11:46:54.042149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:55.112 [2024-07-25 11:46:54.042161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:55.112 [2024-07-25 11:46:54.042174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 
wr_cnt: 0 state: free 00:21:55.112 [2024-07-25 11:46:54.042187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:55.112 [2024-07-25 11:46:54.042199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:55.112 [2024-07-25 11:46:54.042211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:55.112 [2024-07-25 11:46:54.042223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:55.112 [2024-07-25 11:46:54.042235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:55.112 [2024-07-25 11:46:54.042248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:55.112 [2024-07-25 11:46:54.042260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:55.112 [2024-07-25 11:46:54.042272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:55.112 [2024-07-25 11:46:54.042285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:55.112 [2024-07-25 11:46:54.042297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:55.112 [2024-07-25 11:46:54.042309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:55.112 [2024-07-25 11:46:54.042320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:55.112 [2024-07-25 11:46:54.042332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:55.112 [2024-07-25 11:46:54.042345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:55.112 [2024-07-25 11:46:54.042357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:55.112 [2024-07-25 11:46:54.042369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:55.112 [2024-07-25 11:46:54.042382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:55.112 [2024-07-25 11:46:54.042406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:55.112 [2024-07-25 11:46:54.042419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:55.112 [2024-07-25 11:46:54.042438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:55.112 [2024-07-25 11:46:54.042450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:55.112 [2024-07-25 11:46:54.042462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:55.112 [2024-07-25 11:46:54.042475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:55.112 [2024-07-25 11:46:54.042488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:55.112 [2024-07-25 11:46:54.042501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:55.112 [2024-07-25 11:46:54.042523] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:55.112 [2024-07-25 11:46:54.042536] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d38b5acd-953a-4711-aafb-f6576962b114 00:21:55.112 [2024-07-25 11:46:54.042549] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:55.112 [2024-07-25 11:46:54.042560] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:55.112 [2024-07-25 11:46:54.042593] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:55.112 [2024-07-25 11:46:54.042606] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:55.112 [2024-07-25 11:46:54.042617] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:55.112 [2024-07-25 11:46:54.042630] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:55.112 [2024-07-25 11:46:54.042642] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:55.112 [2024-07-25 11:46:54.042652] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:55.112 [2024-07-25 11:46:54.042663] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:55.112 [2024-07-25 11:46:54.042674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.112 [2024-07-25 11:46:54.042687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:55.112 [2024-07-25 11:46:54.042721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.530 ms 00:21:55.112 [2024-07-25 11:46:54.042734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.112 [2024-07-25 11:46:54.059756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.112 [2024-07-25 11:46:54.059829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:55.112 [2024-07-25 11:46:54.059865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.992 ms 00:21:55.112 [2024-07-25 11:46:54.059877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.112 [2024-07-25 11:46:54.060480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.112 [2024-07-25 11:46:54.060531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:55.112 [2024-07-25 11:46:54.060547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.506 ms 00:21:55.112 [2024-07-25 11:46:54.060570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.113 [2024-07-25 11:46:54.099804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.113 [2024-07-25 11:46:54.099901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:55.113 [2024-07-25 11:46:54.099946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.113 [2024-07-25 11:46:54.099960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.113 [2024-07-25 11:46:54.100169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.113 [2024-07-25 11:46:54.100193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:55.113 [2024-07-25 11:46:54.100206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.113 [2024-07-25 11:46:54.100217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:21:55.113 [2024-07-25 11:46:54.100354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.113 [2024-07-25 11:46:54.100374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:55.113 [2024-07-25 11:46:54.100388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.113 [2024-07-25 11:46:54.100399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.113 [2024-07-25 11:46:54.100428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.113 [2024-07-25 11:46:54.100443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:55.113 [2024-07-25 11:46:54.100462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.113 [2024-07-25 11:46:54.100474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.372 [2024-07-25 11:46:54.196714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.372 [2024-07-25 11:46:54.196815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:55.372 [2024-07-25 11:46:54.196851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.372 [2024-07-25 11:46:54.196864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.372 [2024-07-25 11:46:54.280425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.372 [2024-07-25 11:46:54.280546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:55.372 [2024-07-25 11:46:54.280568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.372 [2024-07-25 11:46:54.280581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.372 [2024-07-25 11:46:54.280793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.372 [2024-07-25 11:46:54.280824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:55.372 [2024-07-25 11:46:54.280838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.372 [2024-07-25 11:46:54.280850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.372 [2024-07-25 11:46:54.280902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.372 [2024-07-25 11:46:54.280917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:55.372 [2024-07-25 11:46:54.280930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.372 [2024-07-25 11:46:54.280973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.372 [2024-07-25 11:46:54.281132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.372 [2024-07-25 11:46:54.281152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:55.372 [2024-07-25 11:46:54.281177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.372 [2024-07-25 11:46:54.281188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.372 [2024-07-25 11:46:54.281251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.372 [2024-07-25 11:46:54.281282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:55.372 [2024-07-25 11:46:54.281296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.372 
[2024-07-25 11:46:54.281308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.372 [2024-07-25 11:46:54.281372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.372 [2024-07-25 11:46:54.281387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:55.372 [2024-07-25 11:46:54.281399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.372 [2024-07-25 11:46:54.281411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.372 [2024-07-25 11:46:54.281476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.372 [2024-07-25 11:46:54.281493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:55.372 [2024-07-25 11:46:54.281505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.373 [2024-07-25 11:46:54.281522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.373 [2024-07-25 11:46:54.281735] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 429.913 ms, result 0 00:21:56.749 00:21:56.749 00:21:56.749 11:46:55 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:21:56.749 11:46:55 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:21:57.317 11:46:56 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:57.317 [2024-07-25 11:46:56.173557] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:21:57.317 [2024-07-25 11:46:56.173792] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80296 ] 00:21:57.317 [2024-07-25 11:46:56.351602] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:57.576 [2024-07-25 11:46:56.588182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:58.144 [2024-07-25 11:46:56.956916] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:58.144 [2024-07-25 11:46:56.957095] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:58.144 [2024-07-25 11:46:57.127641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.144 [2024-07-25 11:46:57.127708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:58.144 [2024-07-25 11:46:57.127746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:58.144 [2024-07-25 11:46:57.127759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.144 [2024-07-25 11:46:57.131372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.144 [2024-07-25 11:46:57.131420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:58.144 [2024-07-25 11:46:57.131453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.584 ms 00:21:58.144 [2024-07-25 11:46:57.131465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.144 [2024-07-25 11:46:57.131681] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:58.144 [2024-07-25 11:46:57.132723] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:58.144 [2024-07-25 11:46:57.132772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.144 [2024-07-25 11:46:57.132797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:58.144 [2024-07-25 11:46:57.132810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.103 ms 00:21:58.144 [2024-07-25 11:46:57.132822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.144 [2024-07-25 11:46:57.135050] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:58.144 [2024-07-25 11:46:57.151962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.144 [2024-07-25 11:46:57.152044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:58.144 [2024-07-25 11:46:57.152086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.906 ms 00:21:58.144 [2024-07-25 11:46:57.152099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.144 [2024-07-25 11:46:57.152294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.144 [2024-07-25 11:46:57.152317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:58.144 [2024-07-25 11:46:57.152335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:21:58.144 [2024-07-25 11:46:57.152348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.144 [2024-07-25 11:46:57.161403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:58.144 [2024-07-25 11:46:57.161465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:58.144 [2024-07-25 11:46:57.161499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.994 ms 00:21:58.144 [2024-07-25 11:46:57.161512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.144 [2024-07-25 11:46:57.161698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.144 [2024-07-25 11:46:57.161722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:58.144 [2024-07-25 11:46:57.161737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:21:58.144 [2024-07-25 11:46:57.161756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.144 [2024-07-25 11:46:57.161809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.144 [2024-07-25 11:46:57.161827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:58.144 [2024-07-25 11:46:57.161846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:21:58.144 [2024-07-25 11:46:57.161858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.144 [2024-07-25 11:46:57.161901] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:58.144 [2024-07-25 11:46:57.167186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.144 [2024-07-25 11:46:57.167228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:58.144 [2024-07-25 11:46:57.167261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.299 ms 00:21:58.144 [2024-07-25 11:46:57.167274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.144 [2024-07-25 11:46:57.167389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.144 [2024-07-25 11:46:57.167409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:58.144 [2024-07-25 11:46:57.167433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:58.144 [2024-07-25 11:46:57.167446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.144 [2024-07-25 11:46:57.167480] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:58.144 [2024-07-25 11:46:57.167516] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:58.144 [2024-07-25 11:46:57.167569] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:58.144 [2024-07-25 11:46:57.167594] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:21:58.144 [2024-07-25 11:46:57.167706] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:58.144 [2024-07-25 11:46:57.167735] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:58.144 [2024-07-25 11:46:57.167753] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:21:58.144 [2024-07-25 11:46:57.167770] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:58.144 [2024-07-25 11:46:57.167784] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:58.144 [2024-07-25 11:46:57.167811] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:58.144 [2024-07-25 11:46:57.167823] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:58.144 [2024-07-25 11:46:57.167835] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:58.144 [2024-07-25 11:46:57.167847] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:58.144 [2024-07-25 11:46:57.167861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.144 [2024-07-25 11:46:57.167873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:58.144 [2024-07-25 11:46:57.167886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.384 ms 00:21:58.144 [2024-07-25 11:46:57.167897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.144 [2024-07-25 11:46:57.168010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.144 [2024-07-25 11:46:57.168034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:58.144 [2024-07-25 11:46:57.168054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:21:58.144 [2024-07-25 11:46:57.168067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.144 [2024-07-25 11:46:57.168196] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:58.144 [2024-07-25 11:46:57.168214] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:58.144 [2024-07-25 11:46:57.168228] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:58.144 [2024-07-25 11:46:57.168239] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:58.144 [2024-07-25 11:46:57.168252] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:58.144 [2024-07-25 11:46:57.168262] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:58.144 [2024-07-25 11:46:57.168286] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:58.144 [2024-07-25 11:46:57.168298] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:58.144 [2024-07-25 11:46:57.168309] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:58.144 [2024-07-25 11:46:57.168319] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:58.144 [2024-07-25 11:46:57.168331] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:58.144 [2024-07-25 11:46:57.168342] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:58.144 [2024-07-25 11:46:57.168353] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:58.144 [2024-07-25 11:46:57.168364] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:58.144 [2024-07-25 11:46:57.168375] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:58.144 [2024-07-25 11:46:57.168385] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:58.144 [2024-07-25 11:46:57.168398] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:58.144 [2024-07-25 11:46:57.168410] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:58.144 [2024-07-25 11:46:57.168436] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:58.144 [2024-07-25 11:46:57.168448] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:58.144 [2024-07-25 11:46:57.168461] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:58.144 [2024-07-25 11:46:57.168472] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:58.144 [2024-07-25 11:46:57.168484] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:58.144 [2024-07-25 11:46:57.168496] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:58.144 [2024-07-25 11:46:57.168507] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:58.144 [2024-07-25 11:46:57.168518] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:58.144 [2024-07-25 11:46:57.168529] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:58.144 [2024-07-25 11:46:57.168540] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:58.144 [2024-07-25 11:46:57.168551] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:58.144 [2024-07-25 11:46:57.168562] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:58.144 [2024-07-25 11:46:57.168574] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:58.144 [2024-07-25 11:46:57.168584] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:58.144 [2024-07-25 11:46:57.168596] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:58.145 [2024-07-25 11:46:57.168609] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:58.145 [2024-07-25 11:46:57.168621] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:58.145 [2024-07-25 11:46:57.168632] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:58.145 [2024-07-25 11:46:57.168643] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:58.145 [2024-07-25 11:46:57.168654] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:58.145 [2024-07-25 11:46:57.168666] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:58.145 [2024-07-25 11:46:57.168676] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:58.145 [2024-07-25 11:46:57.168696] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:58.145 [2024-07-25 11:46:57.168707] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:58.145 [2024-07-25 11:46:57.168718] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:58.145 [2024-07-25 11:46:57.168729] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:58.145 [2024-07-25 11:46:57.168741] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:58.145 [2024-07-25 11:46:57.168753] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:58.145 [2024-07-25 11:46:57.168766] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:58.145 [2024-07-25 11:46:57.168784] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:58.145 [2024-07-25 11:46:57.168799] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:58.145 [2024-07-25 11:46:57.168811] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:58.145 
[2024-07-25 11:46:57.168823] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:58.145 [2024-07-25 11:46:57.168834] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:58.145 [2024-07-25 11:46:57.168845] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:58.145 [2024-07-25 11:46:57.168858] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:58.145 [2024-07-25 11:46:57.168874] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:58.145 [2024-07-25 11:46:57.168888] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:58.145 [2024-07-25 11:46:57.168901] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:58.145 [2024-07-25 11:46:57.168913] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:21:58.145 [2024-07-25 11:46:57.168940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:58.145 [2024-07-25 11:46:57.168958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:58.145 [2024-07-25 11:46:57.168971] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:58.145 [2024-07-25 11:46:57.168990] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:58.145 [2024-07-25 11:46:57.169003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:58.145 [2024-07-25 11:46:57.169015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:58.145 [2024-07-25 11:46:57.169027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:58.145 [2024-07-25 11:46:57.169039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:58.145 [2024-07-25 11:46:57.169051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:58.145 [2024-07-25 11:46:57.169063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:58.145 [2024-07-25 11:46:57.169076] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:58.145 [2024-07-25 11:46:57.169088] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:58.145 [2024-07-25 11:46:57.169102] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:58.145 [2024-07-25 11:46:57.169116] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:21:58.145 [2024-07-25 11:46:57.169129] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:58.145 [2024-07-25 11:46:57.169141] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:58.145 [2024-07-25 11:46:57.169153] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:58.145 [2024-07-25 11:46:57.169166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.145 [2024-07-25 11:46:57.169180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:58.145 [2024-07-25 11:46:57.169192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.049 ms 00:21:58.145 [2024-07-25 11:46:57.169204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.404 [2024-07-25 11:46:57.220648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.404 [2024-07-25 11:46:57.220726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:58.404 [2024-07-25 11:46:57.220767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.355 ms 00:21:58.404 [2024-07-25 11:46:57.220781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.404 [2024-07-25 11:46:57.221085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.404 [2024-07-25 11:46:57.221123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:58.404 [2024-07-25 11:46:57.221140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:21:58.404 [2024-07-25 11:46:57.221154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.404 [2024-07-25 11:46:57.265852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.404 [2024-07-25 11:46:57.265943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:58.404 [2024-07-25 11:46:57.265966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.656 ms 00:21:58.404 [2024-07-25 11:46:57.265985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.404 [2024-07-25 11:46:57.266194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.404 [2024-07-25 11:46:57.266226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:58.404 [2024-07-25 11:46:57.266243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:58.404 [2024-07-25 11:46:57.266259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.404 [2024-07-25 11:46:57.266875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.404 [2024-07-25 11:46:57.266910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:58.404 [2024-07-25 11:46:57.266939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.580 ms 00:21:58.404 [2024-07-25 11:46:57.266953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.404 [2024-07-25 11:46:57.267164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.404 [2024-07-25 11:46:57.267193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:58.404 [2024-07-25 11:46:57.267208] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.169 ms 00:21:58.404 [2024-07-25 11:46:57.267221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.404 [2024-07-25 11:46:57.286975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.404 [2024-07-25 11:46:57.287022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:58.404 [2024-07-25 11:46:57.287056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.721 ms 00:21:58.404 [2024-07-25 11:46:57.287068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.404 [2024-07-25 11:46:57.303368] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:21:58.404 [2024-07-25 11:46:57.303426] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:58.404 [2024-07-25 11:46:57.303469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.404 [2024-07-25 11:46:57.303490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:58.404 [2024-07-25 11:46:57.303504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.185 ms 00:21:58.404 [2024-07-25 11:46:57.303522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.404 [2024-07-25 11:46:57.331790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.404 [2024-07-25 11:46:57.331834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:58.404 [2024-07-25 11:46:57.331868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.101 ms 00:21:58.404 [2024-07-25 11:46:57.331880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.404 [2024-07-25 11:46:57.346633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.404 [2024-07-25 11:46:57.346675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:58.404 [2024-07-25 11:46:57.346707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.583 ms 00:21:58.404 [2024-07-25 11:46:57.346718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.404 [2024-07-25 11:46:57.361298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.404 [2024-07-25 11:46:57.361340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:58.404 [2024-07-25 11:46:57.361372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.486 ms 00:21:58.404 [2024-07-25 11:46:57.361384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.404 [2024-07-25 11:46:57.362239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.404 [2024-07-25 11:46:57.362274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:58.404 [2024-07-25 11:46:57.362290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.688 ms 00:21:58.404 [2024-07-25 11:46:57.362303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.404 [2024-07-25 11:46:57.442397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.404 [2024-07-25 11:46:57.442506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:58.404 [2024-07-25 11:46:57.442539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 80.045 ms 00:21:58.404 [2024-07-25 11:46:57.442553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.663 [2024-07-25 11:46:57.455853] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:58.663 [2024-07-25 11:46:57.477611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.663 [2024-07-25 11:46:57.477710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:58.663 [2024-07-25 11:46:57.477749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.796 ms 00:21:58.663 [2024-07-25 11:46:57.477770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.663 [2024-07-25 11:46:57.478122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.663 [2024-07-25 11:46:57.478160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:58.663 [2024-07-25 11:46:57.478177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:58.663 [2024-07-25 11:46:57.478189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.663 [2024-07-25 11:46:57.478280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.663 [2024-07-25 11:46:57.478311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:58.663 [2024-07-25 11:46:57.478326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:21:58.663 [2024-07-25 11:46:57.478339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.663 [2024-07-25 11:46:57.478379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.663 [2024-07-25 11:46:57.478402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:58.663 [2024-07-25 11:46:57.478417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:58.663 [2024-07-25 11:46:57.478429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.663 [2024-07-25 11:46:57.478479] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:58.663 [2024-07-25 11:46:57.478507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.663 [2024-07-25 11:46:57.478521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:58.663 [2024-07-25 11:46:57.478535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:21:58.663 [2024-07-25 11:46:57.478547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.663 [2024-07-25 11:46:57.509887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.663 [2024-07-25 11:46:57.509969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:58.663 [2024-07-25 11:46:57.510004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.303 ms 00:21:58.663 [2024-07-25 11:46:57.510017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.663 [2024-07-25 11:46:57.510168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.663 [2024-07-25 11:46:57.510191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:58.663 [2024-07-25 11:46:57.510205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:21:58.663 [2024-07-25 11:46:57.510218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
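The startup trace above follows a fixed pattern: every management step emits four records through mngt/ftl_mngt.c -- an Action marker (source line 427), the step name (428), its duration (430), and its status (431). That regularity makes it easy to rank the bring-up steps by cost straight from a console capture. A minimal sketch, assuming the output above was saved one record per line (as the live console emits it) to console.log, which is a hypothetical path:

#!/usr/bin/env bash
# Pair each trace_step "name:" record with the "duration:" record that
# follows it, then list the slowest FTL startup steps first.
# console.log is a hypothetical capture of the console output above.
awk '
  / 428:trace_step: .* name: /     { sub(/.*name: /, ""); name = $0 }
  / 430:trace_step: .* duration: / { sub(/.*duration: /, ""); sub(/ ms.*/, "")
                                     printf "%10.3f ms  %s\n", $0, name }
' console.log | sort -rn | head

On the trace above this would put Restore P2L checkpoints (80.045 ms) and Initialize metadata (51.355 ms) at the top.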
00:21:58.663 [2024-07-25 11:46:57.511458] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:58.663 [2024-07-25 11:46:57.515593] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 383.409 ms, result 0 00:21:58.663 [2024-07-25 11:46:57.516464] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:58.663 [2024-07-25 11:46:57.532657] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:58.663  Copying: 4096/4096 [kB] (average 23 MBps)[2024-07-25 11:46:57.704334] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:58.923 [2024-07-25 11:46:57.717450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.923 [2024-07-25 11:46:57.717523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:58.923 [2024-07-25 11:46:57.717560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:58.923 [2024-07-25 11:46:57.717585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.923 [2024-07-25 11:46:57.717638] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:58.923 [2024-07-25 11:46:57.721583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.923 [2024-07-25 11:46:57.721636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:58.923 [2024-07-25 11:46:57.721667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.923 ms 00:21:58.923 [2024-07-25 11:46:57.721680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.923 [2024-07-25 11:46:57.723591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.923 [2024-07-25 11:46:57.723647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:58.923 [2024-07-25 11:46:57.723681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.879 ms 00:21:58.924 [2024-07-25 11:46:57.723693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.924 [2024-07-25 11:46:57.727938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.924 [2024-07-25 11:46:57.728010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:58.924 [2024-07-25 11:46:57.728036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.219 ms 00:21:58.924 [2024-07-25 11:46:57.728053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.924 [2024-07-25 11:46:57.735438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.924 [2024-07-25 11:46:57.735496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:58.924 [2024-07-25 11:46:57.735528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.340 ms 00:21:58.924 [2024-07-25 11:46:57.735541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.924 [2024-07-25 11:46:57.765235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.924 [2024-07-25 11:46:57.765294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:58.924 [2024-07-25 11:46:57.765328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
29.616 ms 00:21:58.924 [2024-07-25 11:46:57.765340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.924 [2024-07-25 11:46:57.782713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.924 [2024-07-25 11:46:57.782776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:58.924 [2024-07-25 11:46:57.782809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.290 ms 00:21:58.924 [2024-07-25 11:46:57.782829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.924 [2024-07-25 11:46:57.783014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.924 [2024-07-25 11:46:57.783038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:58.924 [2024-07-25 11:46:57.783051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:21:58.924 [2024-07-25 11:46:57.783080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.924 [2024-07-25 11:46:57.813204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.924 [2024-07-25 11:46:57.813262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:58.924 [2024-07-25 11:46:57.813294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.099 ms 00:21:58.924 [2024-07-25 11:46:57.813305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.924 [2024-07-25 11:46:57.843165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.924 [2024-07-25 11:46:57.843224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:58.924 [2024-07-25 11:46:57.843257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.795 ms 00:21:58.924 [2024-07-25 11:46:57.843269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.924 [2024-07-25 11:46:57.871870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.924 [2024-07-25 11:46:57.871957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:58.924 [2024-07-25 11:46:57.871975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.521 ms 00:21:58.924 [2024-07-25 11:46:57.871987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.924 [2024-07-25 11:46:57.900467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.924 [2024-07-25 11:46:57.900512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:58.924 [2024-07-25 11:46:57.900544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.367 ms 00:21:58.924 [2024-07-25 11:46:57.900556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.924 [2024-07-25 11:46:57.900667] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:58.924 [2024-07-25 11:46:57.900696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:58.924 [2024-07-25 11:46:57.900712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:58.924 [2024-07-25 11:46:57.900726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:58.924 [2024-07-25 11:46:57.900738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:58.924 [2024-07-25 
11:46:57.900751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:58.924 [2024-07-25 11:46:57.900764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:58.924 [2024-07-25 11:46:57.900776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:58.924 [2024-07-25 11:46:57.900797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:58.924 [2024-07-25 11:46:57.900810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:58.924 [2024-07-25 11:46:57.900822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:58.924 [2024-07-25 11:46:57.900834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:58.924 [2024-07-25 11:46:57.900848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:58.924 [2024-07-25 11:46:57.900860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:58.924 [2024-07-25 11:46:57.900872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:58.924 [2024-07-25 11:46:57.900884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:58.924 [2024-07-25 11:46:57.900897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:58.924 [2024-07-25 11:46:57.900909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:58.924 [2024-07-25 11:46:57.900922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:58.924 [2024-07-25 11:46:57.900934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:58.924 [2024-07-25 11:46:57.900947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:58.924 [2024-07-25 11:46:57.900975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:58.924 [2024-07-25 11:46:57.900988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:58.924 [2024-07-25 11:46:57.901000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:58.924 [2024-07-25 11:46:57.901013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:58.924 [2024-07-25 11:46:57.901026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:58.924 [2024-07-25 11:46:57.901048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:58.924 [2024-07-25 11:46:57.901061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:58.924 [2024-07-25 11:46:57.901073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:58.924 [2024-07-25 11:46:57.901086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 
00:21:58.924 [2024-07-25 11:46:57.901098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:58.924 [2024-07-25 11:46:57.901111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:58.924 [2024-07-25 11:46:57.901124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:58.924 [2024-07-25 11:46:57.901136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:58.924 [2024-07-25 11:46:57.901148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:58.924 [2024-07-25 11:46:57.901162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:58.924 [2024-07-25 11:46:57.901175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:58.924 [2024-07-25 11:46:57.901189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:58.924 [2024-07-25 11:46:57.901202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:58.924 [2024-07-25 11:46:57.901215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:58.924 [2024-07-25 11:46:57.901228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:58.924 [2024-07-25 11:46:57.901241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:58.924 [2024-07-25 11:46:57.901254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:58.924 [2024-07-25 11:46:57.901267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:58.924 [2024-07-25 11:46:57.901281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:58.924 [2024-07-25 11:46:57.901294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:58.924 [2024-07-25 11:46:57.901307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:58.925 [2024-07-25 11:46:57.901320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:58.925 [2024-07-25 11:46:57.901334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:58.925 [2024-07-25 11:46:57.901346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:58.925 [2024-07-25 11:46:57.901359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:58.925 [2024-07-25 11:46:57.901380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:58.925 [2024-07-25 11:46:57.901393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:58.925 [2024-07-25 11:46:57.901413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:58.925 [2024-07-25 11:46:57.901426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 
wr_cnt: 0 state: free 00:21:58.925 [2024-07-25 11:46:57.901438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:58.925 [2024-07-25 11:46:57.901451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:58.925 [2024-07-25 11:46:57.901464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:58.925 [2024-07-25 11:46:57.901476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:58.925 [2024-07-25 11:46:57.901489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:58.925 [2024-07-25 11:46:57.901502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:58.925 [2024-07-25 11:46:57.901515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:58.925 [2024-07-25 11:46:57.901528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:58.925 [2024-07-25 11:46:57.901541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:58.925 [2024-07-25 11:46:57.901554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:58.925 [2024-07-25 11:46:57.901567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:58.925 [2024-07-25 11:46:57.901580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:58.925 [2024-07-25 11:46:57.901594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:58.925 [2024-07-25 11:46:57.901607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:58.925 [2024-07-25 11:46:57.901621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:58.925 [2024-07-25 11:46:57.901634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:58.925 [2024-07-25 11:46:57.901646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:58.925 [2024-07-25 11:46:57.901660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:58.925 [2024-07-25 11:46:57.901673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:58.925 [2024-07-25 11:46:57.901686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:58.925 [2024-07-25 11:46:57.901699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:58.925 [2024-07-25 11:46:57.901711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:58.925 [2024-07-25 11:46:57.901724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:58.925 [2024-07-25 11:46:57.901737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:58.925 [2024-07-25 11:46:57.901750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:58.925 [2024-07-25 11:46:57.901763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:58.925 [2024-07-25 11:46:57.901776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:58.925 [2024-07-25 11:46:57.901789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:58.925 [2024-07-25 11:46:57.901802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:58.925 [2024-07-25 11:46:57.901814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:58.925 [2024-07-25 11:46:57.901827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:58.925 [2024-07-25 11:46:57.901839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:58.925 [2024-07-25 11:46:57.901852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:58.925 [2024-07-25 11:46:57.901865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:58.925 [2024-07-25 11:46:57.901878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:58.925 [2024-07-25 11:46:57.901890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:58.925 [2024-07-25 11:46:57.901903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:58.925 [2024-07-25 11:46:57.901916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:58.925 [2024-07-25 11:46:57.901938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:58.925 [2024-07-25 11:46:57.901951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:58.925 [2024-07-25 11:46:57.901964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:58.925 [2024-07-25 11:46:57.901977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:58.925 [2024-07-25 11:46:57.901990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:58.925 [2024-07-25 11:46:57.902003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:58.925 [2024-07-25 11:46:57.902017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:58.925 [2024-07-25 11:46:57.902030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:58.925 [2024-07-25 11:46:57.902065] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:58.925 [2024-07-25 11:46:57.902077] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d38b5acd-953a-4711-aafb-f6576962b114 00:21:58.925 [2024-07-25 11:46:57.902091] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:58.925 [2024-07-25 11:46:57.902103] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:58.925 
[2024-07-25 11:46:57.902131] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:58.925 [2024-07-25 11:46:57.902144] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:58.925 [2024-07-25 11:46:57.902156] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:58.925 [2024-07-25 11:46:57.902168] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:58.925 [2024-07-25 11:46:57.902180] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:58.925 [2024-07-25 11:46:57.902191] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:58.925 [2024-07-25 11:46:57.902213] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:58.925 [2024-07-25 11:46:57.902225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.925 [2024-07-25 11:46:57.902238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:58.925 [2024-07-25 11:46:57.902257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.576 ms 00:21:58.925 [2024-07-25 11:46:57.902269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.925 [2024-07-25 11:46:57.919238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.925 [2024-07-25 11:46:57.919296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:58.925 [2024-07-25 11:46:57.919328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.941 ms 00:21:58.925 [2024-07-25 11:46:57.919340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.925 [2024-07-25 11:46:57.919851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.925 [2024-07-25 11:46:57.919887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:58.925 [2024-07-25 11:46:57.919903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.463 ms 00:21:58.925 [2024-07-25 11:46:57.919915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.925 [2024-07-25 11:46:57.959927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:58.925 [2024-07-25 11:46:57.959990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:58.925 [2024-07-25 11:46:57.960022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:58.925 [2024-07-25 11:46:57.960047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.925 [2024-07-25 11:46:57.960180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:58.925 [2024-07-25 11:46:57.960199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:58.925 [2024-07-25 11:46:57.960212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:58.925 [2024-07-25 11:46:57.960224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.925 [2024-07-25 11:46:57.960312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:58.925 [2024-07-25 11:46:57.960333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:58.925 [2024-07-25 11:46:57.960346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:58.925 [2024-07-25 11:46:57.960359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.925 [2024-07-25 11:46:57.960385] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:21:58.925 [2024-07-25 11:46:57.960409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:58.925 [2024-07-25 11:46:57.960421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:58.925 [2024-07-25 11:46:57.960433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.185 [2024-07-25 11:46:58.065618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.185 [2024-07-25 11:46:58.065715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:59.185 [2024-07-25 11:46:58.065735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.185 [2024-07-25 11:46:58.065751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.185 [2024-07-25 11:46:58.153505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.185 [2024-07-25 11:46:58.153595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:59.185 [2024-07-25 11:46:58.153631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.185 [2024-07-25 11:46:58.153645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.185 [2024-07-25 11:46:58.153741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.185 [2024-07-25 11:46:58.153772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:59.185 [2024-07-25 11:46:58.153786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.185 [2024-07-25 11:46:58.153798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.185 [2024-07-25 11:46:58.153842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.185 [2024-07-25 11:46:58.153857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:59.185 [2024-07-25 11:46:58.153870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.185 [2024-07-25 11:46:58.153889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.185 [2024-07-25 11:46:58.154049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.185 [2024-07-25 11:46:58.154077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:59.185 [2024-07-25 11:46:58.154091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.185 [2024-07-25 11:46:58.154103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.185 [2024-07-25 11:46:58.154159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.185 [2024-07-25 11:46:58.154178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:59.185 [2024-07-25 11:46:58.154192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.185 [2024-07-25 11:46:58.154211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.185 [2024-07-25 11:46:58.154271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.185 [2024-07-25 11:46:58.154290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:59.185 [2024-07-25 11:46:58.154303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.185 [2024-07-25 11:46:58.154316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
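Shutdown is the same management pipeline unwound: once the persist steps and the bands/statistics dump above have run (WAF prints "inf" there, presumably because user writes is 0 while all 960 total writes were metadata), each startup step's Rollback entry is replayed in reverse registration order, and the uniform 0.000 ms durations show the teardown callbacks have nothing left to undo on a clean stop. A minimal sketch that lists that rollback order from the same hypothetical one-record-per-line capture:

# Print the Rollback step names in log order; they walk the startup
# pipeline backwards, so "Open base bdev" (the first bring-up step)
# rolls back last. console.log is again a hypothetical capture.
awk '
  / 427:trace_step: .* Rollback$/       { armed = 1; next }
  armed && / 428:trace_step: .* name: / { sub(/.*name: /, ""); print; armed = 0 }
' console.log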
00:21:59.185 [2024-07-25 11:46:58.154382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.185 [2024-07-25 11:46:58.154402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:59.185 [2024-07-25 11:46:58.154415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.185 [2024-07-25 11:46:58.154444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.185 [2024-07-25 11:46:58.154654] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 437.194 ms, result 0 00:22:00.655 00:22:00.655 00:22:00.655 11:46:59 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=80327 00:22:00.655 11:46:59 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:22:00.655 11:46:59 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 80327 00:22:00.655 11:46:59 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 80327 ']' 00:22:00.655 11:46:59 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:00.655 11:46:59 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:00.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:00.655 11:46:59 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:00.656 11:46:59 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:00.656 11:46:59 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:22:00.656 [2024-07-25 11:46:59.458872] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:22:00.656 [2024-07-25 11:46:59.459121] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80327 ] 00:22:00.656 [2024-07-25 11:46:59.633351] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.915 [2024-07-25 11:46:59.882589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:01.848 11:47:00 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:01.848 11:47:00 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:22:01.848 11:47:00 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:22:02.105 [2024-07-25 11:47:00.960861] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:02.105 [2024-07-25 11:47:00.960976] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:02.105 [2024-07-25 11:47:01.120856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.105 [2024-07-25 11:47:01.120955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:02.105 [2024-07-25 11:47:01.120980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:02.105 [2024-07-25 11:47:01.120996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.105 [2024-07-25 11:47:01.124558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.105 [2024-07-25 11:47:01.124611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:02.105 [2024-07-25 11:47:01.124628] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.529 ms 00:22:02.105 [2024-07-25 11:47:01.124644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.105 [2024-07-25 11:47:01.124794] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:02.106 [2024-07-25 11:47:01.125738] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:02.106 [2024-07-25 11:47:01.125778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.106 [2024-07-25 11:47:01.125796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:02.106 [2024-07-25 11:47:01.125812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.996 ms 00:22:02.106 [2024-07-25 11:47:01.125830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.106 [2024-07-25 11:47:01.127994] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:02.106 [2024-07-25 11:47:01.145713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.106 [2024-07-25 11:47:01.145762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:02.106 [2024-07-25 11:47:01.145784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.711 ms 00:22:02.106 [2024-07-25 11:47:01.145797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.106 [2024-07-25 11:47:01.145949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.106 [2024-07-25 11:47:01.145973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:02.106 [2024-07-25 11:47:01.145989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:22:02.106 [2024-07-25 11:47:01.146002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.106 [2024-07-25 11:47:01.155175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.106 [2024-07-25 11:47:01.155231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:02.106 [2024-07-25 11:47:01.155263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.099 ms 00:22:02.106 [2024-07-25 11:47:01.155276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.106 [2024-07-25 11:47:01.155438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.106 [2024-07-25 11:47:01.155460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:02.106 [2024-07-25 11:47:01.155486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:22:02.106 [2024-07-25 11:47:01.155506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.106 [2024-07-25 11:47:01.155555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.106 [2024-07-25 11:47:01.155571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:02.106 [2024-07-25 11:47:01.155586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:22:02.106 [2024-07-25 11:47:01.155598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.106 [2024-07-25 11:47:01.155639] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:02.364 [2024-07-25 11:47:01.160781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:02.364 [2024-07-25 11:47:01.160847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:02.364 [2024-07-25 11:47:01.160865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.156 ms 00:22:02.364 [2024-07-25 11:47:01.160880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.364 [2024-07-25 11:47:01.161003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.364 [2024-07-25 11:47:01.161032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:02.364 [2024-07-25 11:47:01.161050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:02.364 [2024-07-25 11:47:01.161064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.364 [2024-07-25 11:47:01.161097] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:02.364 [2024-07-25 11:47:01.161131] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:02.365 [2024-07-25 11:47:01.161187] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:02.365 [2024-07-25 11:47:01.161217] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:22:02.365 [2024-07-25 11:47:01.161327] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:02.365 [2024-07-25 11:47:01.161353] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:02.365 [2024-07-25 11:47:01.161369] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:22:02.365 [2024-07-25 11:47:01.161392] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:02.365 [2024-07-25 11:47:01.161407] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:02.365 [2024-07-25 11:47:01.161422] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:02.365 [2024-07-25 11:47:01.161434] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:02.365 [2024-07-25 11:47:01.161448] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:02.365 [2024-07-25 11:47:01.161460] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:02.365 [2024-07-25 11:47:01.161477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.365 [2024-07-25 11:47:01.161490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:02.365 [2024-07-25 11:47:01.161508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.378 ms 00:22:02.365 [2024-07-25 11:47:01.161523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.365 [2024-07-25 11:47:01.161622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.365 [2024-07-25 11:47:01.161638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:02.365 [2024-07-25 11:47:01.161652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:22:02.365 [2024-07-25 11:47:01.161664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.365 [2024-07-25 11:47:01.161790] 
ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:02.365 [2024-07-25 11:47:01.161817] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:02.365 [2024-07-25 11:47:01.161833] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:02.365 [2024-07-25 11:47:01.161846] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.365 [2024-07-25 11:47:01.161865] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:02.365 [2024-07-25 11:47:01.161883] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:02.365 [2024-07-25 11:47:01.161907] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:02.365 [2024-07-25 11:47:01.161945] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:02.365 [2024-07-25 11:47:01.161965] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:02.365 [2024-07-25 11:47:01.161977] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:02.365 [2024-07-25 11:47:01.161990] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:02.365 [2024-07-25 11:47:01.162002] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:02.365 [2024-07-25 11:47:01.162015] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:02.365 [2024-07-25 11:47:01.162035] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:02.365 [2024-07-25 11:47:01.162048] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:02.365 [2024-07-25 11:47:01.162058] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.365 [2024-07-25 11:47:01.162071] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:02.365 [2024-07-25 11:47:01.162082] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:02.365 [2024-07-25 11:47:01.162095] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.365 [2024-07-25 11:47:01.162106] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:02.365 [2024-07-25 11:47:01.162123] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:02.365 [2024-07-25 11:47:01.162135] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:02.365 [2024-07-25 11:47:01.162149] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:02.365 [2024-07-25 11:47:01.162160] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:02.365 [2024-07-25 11:47:01.162175] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:02.365 [2024-07-25 11:47:01.162186] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:02.365 [2024-07-25 11:47:01.162200] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:02.365 [2024-07-25 11:47:01.162223] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:02.365 [2024-07-25 11:47:01.162239] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:02.365 [2024-07-25 11:47:01.162250] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:02.365 [2024-07-25 11:47:01.162263] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:02.365 [2024-07-25 11:47:01.162274] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:02.365 [2024-07-25 
11:47:01.162287] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:02.365 [2024-07-25 11:47:01.162298] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:02.365 [2024-07-25 11:47:01.162311] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:02.365 [2024-07-25 11:47:01.162325] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:02.365 [2024-07-25 11:47:01.162348] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:02.365 [2024-07-25 11:47:01.162368] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:02.365 [2024-07-25 11:47:01.162388] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:02.365 [2024-07-25 11:47:01.162406] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.365 [2024-07-25 11:47:01.162439] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:02.365 [2024-07-25 11:47:01.162452] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:02.365 [2024-07-25 11:47:01.162466] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.365 [2024-07-25 11:47:01.162476] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:02.365 [2024-07-25 11:47:01.162491] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:02.365 [2024-07-25 11:47:01.162502] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:02.365 [2024-07-25 11:47:01.162516] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.365 [2024-07-25 11:47:01.162528] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:02.365 [2024-07-25 11:47:01.162541] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:02.365 [2024-07-25 11:47:01.162553] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:02.365 [2024-07-25 11:47:01.162566] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:02.365 [2024-07-25 11:47:01.162577] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:02.365 [2024-07-25 11:47:01.162592] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:02.365 [2024-07-25 11:47:01.162606] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:02.365 [2024-07-25 11:47:01.162624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:02.365 [2024-07-25 11:47:01.162638] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:02.365 [2024-07-25 11:47:01.162656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:02.365 [2024-07-25 11:47:01.162668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:22:02.365 [2024-07-25 11:47:01.162683] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:02.365 [2024-07-25 11:47:01.162694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:02.365 
[2024-07-25 11:47:01.162708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:02.366 [2024-07-25 11:47:01.162719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:02.366 [2024-07-25 11:47:01.162733] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:02.366 [2024-07-25 11:47:01.162745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:02.366 [2024-07-25 11:47:01.162758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:02.366 [2024-07-25 11:47:01.162770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:02.366 [2024-07-25 11:47:01.162784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:02.366 [2024-07-25 11:47:01.162796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:02.366 [2024-07-25 11:47:01.162810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:02.366 [2024-07-25 11:47:01.162822] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:02.366 [2024-07-25 11:47:01.162837] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:02.366 [2024-07-25 11:47:01.162850] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:02.366 [2024-07-25 11:47:01.162866] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:02.366 [2024-07-25 11:47:01.162878] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:02.366 [2024-07-25 11:47:01.162893] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:02.366 [2024-07-25 11:47:01.162906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.366 [2024-07-25 11:47:01.162935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:02.366 [2024-07-25 11:47:01.162950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.184 ms 00:22:02.366 [2024-07-25 11:47:01.162968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.366 [2024-07-25 11:47:01.204353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.366 [2024-07-25 11:47:01.204440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:02.366 [2024-07-25 11:47:01.204468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.290 ms 00:22:02.366 [2024-07-25 11:47:01.204485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.366 [2024-07-25 11:47:01.204724] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.366 [2024-07-25 11:47:01.204750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:02.366 [2024-07-25 11:47:01.204765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:22:02.366 [2024-07-25 11:47:01.204780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.366 [2024-07-25 11:47:01.249000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.366 [2024-07-25 11:47:01.249084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:02.366 [2024-07-25 11:47:01.249107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.185 ms 00:22:02.366 [2024-07-25 11:47:01.249123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.366 [2024-07-25 11:47:01.249301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.366 [2024-07-25 11:47:01.249326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:02.366 [2024-07-25 11:47:01.249341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:02.366 [2024-07-25 11:47:01.249356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.366 [2024-07-25 11:47:01.249958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.366 [2024-07-25 11:47:01.250002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:02.366 [2024-07-25 11:47:01.250018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.570 ms 00:22:02.366 [2024-07-25 11:47:01.250033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.366 [2024-07-25 11:47:01.250214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.366 [2024-07-25 11:47:01.250239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:02.366 [2024-07-25 11:47:01.250252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.150 ms 00:22:02.366 [2024-07-25 11:47:01.250267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.366 [2024-07-25 11:47:01.272107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.366 [2024-07-25 11:47:01.272199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:02.366 [2024-07-25 11:47:01.272222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.804 ms 00:22:02.366 [2024-07-25 11:47:01.272239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.366 [2024-07-25 11:47:01.289970] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:02.366 [2024-07-25 11:47:01.290024] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:02.366 [2024-07-25 11:47:01.290047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.366 [2024-07-25 11:47:01.290063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:02.366 [2024-07-25 11:47:01.290078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.569 ms 00:22:02.366 [2024-07-25 11:47:01.290093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.366 [2024-07-25 11:47:01.319963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.366 [2024-07-25 
11:47:01.320015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:02.366 [2024-07-25 11:47:01.320034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.769 ms 00:22:02.366 [2024-07-25 11:47:01.320053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.366 [2024-07-25 11:47:01.335834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.366 [2024-07-25 11:47:01.335885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:02.366 [2024-07-25 11:47:01.335915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.679 ms 00:22:02.366 [2024-07-25 11:47:01.335948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.366 [2024-07-25 11:47:01.351291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.366 [2024-07-25 11:47:01.351356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:02.366 [2024-07-25 11:47:01.351373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.247 ms 00:22:02.366 [2024-07-25 11:47:01.351387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.366 [2024-07-25 11:47:01.352346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.366 [2024-07-25 11:47:01.352387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:02.366 [2024-07-25 11:47:01.352404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.798 ms 00:22:02.366 [2024-07-25 11:47:01.352419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.624 [2024-07-25 11:47:01.439258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.624 [2024-07-25 11:47:01.439369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:02.624 [2024-07-25 11:47:01.439394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.800 ms 00:22:02.624 [2024-07-25 11:47:01.439412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.624 [2024-07-25 11:47:01.452414] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:02.624 [2024-07-25 11:47:01.474516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.624 [2024-07-25 11:47:01.474629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:02.624 [2024-07-25 11:47:01.474658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.871 ms 00:22:02.624 [2024-07-25 11:47:01.474673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.624 [2024-07-25 11:47:01.474887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.624 [2024-07-25 11:47:01.474908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:02.624 [2024-07-25 11:47:01.474953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:02.624 [2024-07-25 11:47:01.474969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.624 [2024-07-25 11:47:01.475070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.624 [2024-07-25 11:47:01.475088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:02.624 [2024-07-25 11:47:01.475106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:22:02.624 
[2024-07-25 11:47:01.475119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.624 [2024-07-25 11:47:01.475158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.624 [2024-07-25 11:47:01.475173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:02.624 [2024-07-25 11:47:01.475188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:02.624 [2024-07-25 11:47:01.475200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.624 [2024-07-25 11:47:01.475258] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:02.624 [2024-07-25 11:47:01.475275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.624 [2024-07-25 11:47:01.475293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:02.624 [2024-07-25 11:47:01.475305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:22:02.624 [2024-07-25 11:47:01.475322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.624 [2024-07-25 11:47:01.508492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.624 [2024-07-25 11:47:01.508560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:02.625 [2024-07-25 11:47:01.508580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.135 ms 00:22:02.625 [2024-07-25 11:47:01.508596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.625 [2024-07-25 11:47:01.508739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.625 [2024-07-25 11:47:01.508770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:02.625 [2024-07-25 11:47:01.508788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:22:02.625 [2024-07-25 11:47:01.508802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.625 [2024-07-25 11:47:01.510067] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:02.625 [2024-07-25 11:47:01.514285] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 388.797 ms, result 0 00:22:02.625 [2024-07-25 11:47:01.516082] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:02.625 Some configs were skipped because the RPC state that can call them passed over. 
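The two bdev_ftl_unmap calls that follow exercise trim at both ends of the device: per the layout dump repeated later in this log, ftl0 exposes 23592960 L2P entries, so --lba 23591936 with --num_blocks 1024 unmaps exactly the last 1024 blocks (23591936 + 1024 = 23592960), mirroring the --lba 0 call at the front. A minimal sketch of the same two calls, using only flags that appear verbatim in this log (the rpc.py path is shortened from the CI workspace path shown below):

+ scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
+ scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024

Each call runs as a separate 'FTL trim' management process on the target, which is why each of the traces below ends with: Management process finished, name 'FTL trim', ... result 0.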
00:22:02.625 11:47:01 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:22:02.882 [2024-07-25 11:47:01.826401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.882 [2024-07-25 11:47:01.826477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:22:02.883 [2024-07-25 11:47:01.826514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.580 ms 00:22:02.883 [2024-07-25 11:47:01.826530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.883 [2024-07-25 11:47:01.826589] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.784 ms, result 0 00:22:02.883 true 00:22:02.883 11:47:01 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:22:03.141 [2024-07-25 11:47:02.054165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.141 [2024-07-25 11:47:02.054241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:22:03.141 [2024-07-25 11:47:02.054264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.118 ms 00:22:03.141 [2024-07-25 11:47:02.054279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.141 [2024-07-25 11:47:02.054333] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.303 ms, result 0 00:22:03.141 true 00:22:03.141 11:47:02 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 80327 00:22:03.141 11:47:02 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 80327 ']' 00:22:03.141 11:47:02 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 80327 00:22:03.141 11:47:02 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:22:03.141 11:47:02 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:03.141 11:47:02 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80327 00:22:03.141 killing process with pid 80327 00:22:03.141 11:47:02 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:03.141 11:47:02 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:03.141 11:47:02 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80327' 00:22:03.141 11:47:02 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 80327 00:22:03.141 11:47:02 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 80327 00:22:04.518 [2024-07-25 11:47:03.282528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.518 [2024-07-25 11:47:03.282613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:04.518 [2024-07-25 11:47:03.282655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:04.518 [2024-07-25 11:47:03.282684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.518 [2024-07-25 11:47:03.282744] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:04.518 [2024-07-25 11:47:03.287047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.518 [2024-07-25 11:47:03.287103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:04.518 [2024-07-25 11:47:03.287129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 4.268 ms 00:22:04.518 [2024-07-25 11:47:03.287160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.518 [2024-07-25 11:47:03.287597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.518 [2024-07-25 11:47:03.287642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:04.518 [2024-07-25 11:47:03.287659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.333 ms 00:22:04.518 [2024-07-25 11:47:03.287674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.518 [2024-07-25 11:47:03.292293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.518 [2024-07-25 11:47:03.292352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:04.518 [2024-07-25 11:47:03.292371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.584 ms 00:22:04.518 [2024-07-25 11:47:03.292395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.518 [2024-07-25 11:47:03.301171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.518 [2024-07-25 11:47:03.301221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:04.518 [2024-07-25 11:47:03.301247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.685 ms 00:22:04.518 [2024-07-25 11:47:03.301277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.518 [2024-07-25 11:47:03.314654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.518 [2024-07-25 11:47:03.314719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:04.518 [2024-07-25 11:47:03.314736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.263 ms 00:22:04.519 [2024-07-25 11:47:03.314753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.519 [2024-07-25 11:47:03.323427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.519 [2024-07-25 11:47:03.323481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:04.519 [2024-07-25 11:47:03.323498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.567 ms 00:22:04.519 [2024-07-25 11:47:03.323512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.519 [2024-07-25 11:47:03.323696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.519 [2024-07-25 11:47:03.323731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:04.519 [2024-07-25 11:47:03.323752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:22:04.519 [2024-07-25 11:47:03.323781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.519 [2024-07-25 11:47:03.336394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.519 [2024-07-25 11:47:03.336443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:04.519 [2024-07-25 11:47:03.336469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.586 ms 00:22:04.519 [2024-07-25 11:47:03.336483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.519 [2024-07-25 11:47:03.348837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.519 [2024-07-25 11:47:03.348893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:04.519 [2024-07-25 
11:47:03.348909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.202 ms 00:22:04.519 [2024-07-25 11:47:03.348940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.519 [2024-07-25 11:47:03.360875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.519 [2024-07-25 11:47:03.360932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:04.519 [2024-07-25 11:47:03.360949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.810 ms 00:22:04.519 [2024-07-25 11:47:03.360964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.519 [2024-07-25 11:47:03.372809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.519 [2024-07-25 11:47:03.372861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:04.519 [2024-07-25 11:47:03.372878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.703 ms 00:22:04.519 [2024-07-25 11:47:03.372892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.519 [2024-07-25 11:47:03.372966] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:04.519 [2024-07-25 11:47:03.372997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:04.519 [2024-07-25 11:47:03.373016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:04.519 [2024-07-25 11:47:03.373031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:04.519 [2024-07-25 11:47:03.373043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:04.519 [2024-07-25 11:47:03.373057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:04.519 [2024-07-25 11:47:03.373069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:04.519 [2024-07-25 11:47:03.373086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:04.519 [2024-07-25 11:47:03.373098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:04.519 [2024-07-25 11:47:03.373113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:04.519 [2024-07-25 11:47:03.373124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:04.519 [2024-07-25 11:47:03.373138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:04.519 [2024-07-25 11:47:03.373150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:04.519 [2024-07-25 11:47:03.373164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:04.519 [2024-07-25 11:47:03.373176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:04.519 [2024-07-25 11:47:03.373192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:04.519 [2024-07-25 11:47:03.373204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:04.519 [2024-07-25 11:47:03.373218] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:04.519 [2024-07-25 11:47:03.373230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:04.519 [2024-07-25 11:47:03.373244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:04.519 [2024-07-25 11:47:03.373256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:04.519 [2024-07-25 11:47:03.373270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:04.519 [2024-07-25 11:47:03.373282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:04.519 [2024-07-25 11:47:03.373298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:04.519 [2024-07-25 11:47:03.373310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:04.519 [2024-07-25 11:47:03.373324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:04.519 [2024-07-25 11:47:03.373336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:04.519 [2024-07-25 11:47:03.373350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:04.519 [2024-07-25 11:47:03.373362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:04.519 [2024-07-25 11:47:03.373377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:04.519 [2024-07-25 11:47:03.373388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:04.519 [2024-07-25 11:47:03.373408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:04.519 [2024-07-25 11:47:03.373420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:04.519 [2024-07-25 11:47:03.373433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:04.519 [2024-07-25 11:47:03.373445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:04.519 [2024-07-25 11:47:03.373459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:04.519 [2024-07-25 11:47:03.373471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:04.519 [2024-07-25 11:47:03.373486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:04.519 [2024-07-25 11:47:03.373499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:04.519 [2024-07-25 11:47:03.373516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:04.519 [2024-07-25 11:47:03.373528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:04.519 [2024-07-25 11:47:03.373544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:04.519 [2024-07-25 
11:47:03.373557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:04.519 [2024-07-25 11:47:03.373571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:04.519 [2024-07-25 11:47:03.373583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:04.519 [2024-07-25 11:47:03.373598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:04.519 [2024-07-25 11:47:03.373610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:04.519 [2024-07-25 11:47:03.373624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:04.519 [2024-07-25 11:47:03.373640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:04.519 [2024-07-25 11:47:03.373655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:04.519 [2024-07-25 11:47:03.373667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:04.519 [2024-07-25 11:47:03.373681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:04.519 [2024-07-25 11:47:03.373694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:04.519 [2024-07-25 11:47:03.373715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:04.519 [2024-07-25 11:47:03.373727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:04.519 [2024-07-25 11:47:03.373745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:04.519 [2024-07-25 11:47:03.373757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:04.519 [2024-07-25 11:47:03.373772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:04.519 [2024-07-25 11:47:03.373784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:04.519 [2024-07-25 11:47:03.373799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:04.519 [2024-07-25 11:47:03.373811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:04.519 [2024-07-25 11:47:03.373825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:04.519 [2024-07-25 11:47:03.373838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:04.519 [2024-07-25 11:47:03.373853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:04.519 [2024-07-25 11:47:03.373865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:04.519 [2024-07-25 11:47:03.373879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:04.519 [2024-07-25 11:47:03.373892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:22:04.519 [2024-07-25 11:47:03.373907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:04.519 [2024-07-25 11:47:03.373930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:04.519 [2024-07-25 11:47:03.373949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:04.520 [2024-07-25 11:47:03.373963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:04.520 [2024-07-25 11:47:03.373980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:04.520 [2024-07-25 11:47:03.373993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:04.520 [2024-07-25 11:47:03.374008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:04.520 [2024-07-25 11:47:03.374027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:04.520 [2024-07-25 11:47:03.374041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:04.520 [2024-07-25 11:47:03.374070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:04.520 [2024-07-25 11:47:03.374086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:04.520 [2024-07-25 11:47:03.374099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:04.520 [2024-07-25 11:47:03.374115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:04.520 [2024-07-25 11:47:03.374127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:04.520 [2024-07-25 11:47:03.374141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:04.520 [2024-07-25 11:47:03.374154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:04.520 [2024-07-25 11:47:03.374168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:04.520 [2024-07-25 11:47:03.374180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:04.520 [2024-07-25 11:47:03.374195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:04.520 [2024-07-25 11:47:03.374207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:04.520 [2024-07-25 11:47:03.374224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:04.520 [2024-07-25 11:47:03.374236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:04.520 [2024-07-25 11:47:03.374251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:04.520 [2024-07-25 11:47:03.374264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:04.520 [2024-07-25 11:47:03.374278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 
wr_cnt: 0 state: free 00:22:04.520 [2024-07-25 11:47:03.374290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:04.520 [2024-07-25 11:47:03.374304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:04.520 [2024-07-25 11:47:03.374316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:04.520 [2024-07-25 11:47:03.374333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:04.520 [2024-07-25 11:47:03.374345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:04.520 [2024-07-25 11:47:03.374360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:04.520 [2024-07-25 11:47:03.374372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:04.520 [2024-07-25 11:47:03.374386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:04.520 [2024-07-25 11:47:03.374398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:04.520 [2024-07-25 11:47:03.374429] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:04.520 [2024-07-25 11:47:03.374444] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d38b5acd-953a-4711-aafb-f6576962b114 00:22:04.520 [2024-07-25 11:47:03.374473] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:04.520 [2024-07-25 11:47:03.374484] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:04.520 [2024-07-25 11:47:03.374498] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:04.520 [2024-07-25 11:47:03.374510] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:04.520 [2024-07-25 11:47:03.374524] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:04.520 [2024-07-25 11:47:03.374537] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:04.520 [2024-07-25 11:47:03.374551] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:04.520 [2024-07-25 11:47:03.374561] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:04.520 [2024-07-25 11:47:03.374588] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:04.520 [2024-07-25 11:47:03.374600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.520 [2024-07-25 11:47:03.374615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:04.520 [2024-07-25 11:47:03.374628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.637 ms 00:22:04.520 [2024-07-25 11:47:03.374645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.520 [2024-07-25 11:47:03.391700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.520 [2024-07-25 11:47:03.391755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:04.520 [2024-07-25 11:47:03.391772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.012 ms 00:22:04.520 [2024-07-25 11:47:03.391802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.520 [2024-07-25 11:47:03.392374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:22:04.520 [2024-07-25 11:47:03.392422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:04.520 [2024-07-25 11:47:03.392441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.479 ms 00:22:04.520 [2024-07-25 11:47:03.392455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.520 [2024-07-25 11:47:03.448903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:04.520 [2024-07-25 11:47:03.448970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:04.520 [2024-07-25 11:47:03.448989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:04.520 [2024-07-25 11:47:03.449004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.520 [2024-07-25 11:47:03.449139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:04.520 [2024-07-25 11:47:03.449172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:04.520 [2024-07-25 11:47:03.449191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:04.520 [2024-07-25 11:47:03.449216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.520 [2024-07-25 11:47:03.449286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:04.520 [2024-07-25 11:47:03.449310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:04.520 [2024-07-25 11:47:03.449323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:04.520 [2024-07-25 11:47:03.449341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.520 [2024-07-25 11:47:03.449368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:04.520 [2024-07-25 11:47:03.449386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:04.520 [2024-07-25 11:47:03.449399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:04.520 [2024-07-25 11:47:03.449416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.520 [2024-07-25 11:47:03.553583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:04.520 [2024-07-25 11:47:03.553692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:04.520 [2024-07-25 11:47:03.553725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:04.520 [2024-07-25 11:47:03.553743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.778 [2024-07-25 11:47:03.639575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:04.778 [2024-07-25 11:47:03.639666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:04.778 [2024-07-25 11:47:03.639691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:04.779 [2024-07-25 11:47:03.639707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.779 [2024-07-25 11:47:03.639843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:04.779 [2024-07-25 11:47:03.639868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:04.779 [2024-07-25 11:47:03.639881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:04.779 [2024-07-25 11:47:03.639899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:22:04.779 [2024-07-25 11:47:03.639966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:04.779 [2024-07-25 11:47:03.639988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:04.779 [2024-07-25 11:47:03.640002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:04.779 [2024-07-25 11:47:03.640017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.779 [2024-07-25 11:47:03.640169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:04.779 [2024-07-25 11:47:03.640194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:04.779 [2024-07-25 11:47:03.640208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:04.779 [2024-07-25 11:47:03.640222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.779 [2024-07-25 11:47:03.640288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:04.779 [2024-07-25 11:47:03.640313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:04.779 [2024-07-25 11:47:03.640326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:04.779 [2024-07-25 11:47:03.640340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.779 [2024-07-25 11:47:03.640403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:04.779 [2024-07-25 11:47:03.640423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:04.779 [2024-07-25 11:47:03.640436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:04.779 [2024-07-25 11:47:03.640453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.779 [2024-07-25 11:47:03.640515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:04.779 [2024-07-25 11:47:03.640537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:04.779 [2024-07-25 11:47:03.640550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:04.779 [2024-07-25 11:47:03.640564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.779 [2024-07-25 11:47:03.640753] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 358.204 ms, result 0 00:22:05.713 11:47:04 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:05.713 [2024-07-25 11:47:04.755122] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:22:05.713 [2024-07-25 11:47:04.755306] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80395 ] 00:22:05.971 [2024-07-25 11:47:04.928502] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:06.228 [2024-07-25 11:47:05.170456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:06.485 [2024-07-25 11:47:05.522570] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:06.485 [2024-07-25 11:47:05.522675] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:06.744 [2024-07-25 11:47:05.690482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.744 [2024-07-25 11:47:05.690554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:06.744 [2024-07-25 11:47:05.690578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:06.744 [2024-07-25 11:47:05.690592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.744 [2024-07-25 11:47:05.694166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.744 [2024-07-25 11:47:05.694214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:06.744 [2024-07-25 11:47:05.694232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.544 ms 00:22:06.744 [2024-07-25 11:47:05.694245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.744 [2024-07-25 11:47:05.694383] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:06.744 [2024-07-25 11:47:05.695323] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:06.744 [2024-07-25 11:47:05.695365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.744 [2024-07-25 11:47:05.695380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:06.744 [2024-07-25 11:47:05.695395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.993 ms 00:22:06.744 [2024-07-25 11:47:05.695407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.744 [2024-07-25 11:47:05.697493] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:06.744 [2024-07-25 11:47:05.714384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.744 [2024-07-25 11:47:05.714431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:06.744 [2024-07-25 11:47:05.714456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.892 ms 00:22:06.744 [2024-07-25 11:47:05.714470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.744 [2024-07-25 11:47:05.714597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.744 [2024-07-25 11:47:05.714619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:06.744 [2024-07-25 11:47:05.714643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:22:06.744 [2024-07-25 11:47:05.714655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.744 [2024-07-25 11:47:05.723241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:06.744 [2024-07-25 11:47:05.723287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:06.744 [2024-07-25 11:47:05.723305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.512 ms 00:22:06.744 [2024-07-25 11:47:05.723319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.744 [2024-07-25 11:47:05.723474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.744 [2024-07-25 11:47:05.723498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:06.744 [2024-07-25 11:47:05.723515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:22:06.744 [2024-07-25 11:47:05.723528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.744 [2024-07-25 11:47:05.723582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.744 [2024-07-25 11:47:05.723601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:06.744 [2024-07-25 11:47:05.723620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:22:06.745 [2024-07-25 11:47:05.723633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.745 [2024-07-25 11:47:05.723676] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:06.745 [2024-07-25 11:47:05.728720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.745 [2024-07-25 11:47:05.728762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:06.745 [2024-07-25 11:47:05.728779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.057 ms 00:22:06.745 [2024-07-25 11:47:05.728791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.745 [2024-07-25 11:47:05.728893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.745 [2024-07-25 11:47:05.728913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:06.745 [2024-07-25 11:47:05.728943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:06.745 [2024-07-25 11:47:05.728956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.745 [2024-07-25 11:47:05.728992] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:06.745 [2024-07-25 11:47:05.729027] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:06.745 [2024-07-25 11:47:05.729083] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:06.745 [2024-07-25 11:47:05.729106] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:22:06.745 [2024-07-25 11:47:05.729216] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:06.745 [2024-07-25 11:47:05.729233] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:06.745 [2024-07-25 11:47:05.729250] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:22:06.745 [2024-07-25 11:47:05.729267] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:06.745 [2024-07-25 11:47:05.729282] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:06.745 [2024-07-25 11:47:05.729302] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:06.745 [2024-07-25 11:47:05.729314] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:06.745 [2024-07-25 11:47:05.729327] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:06.745 [2024-07-25 11:47:05.729339] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:06.745 [2024-07-25 11:47:05.729353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.745 [2024-07-25 11:47:05.729366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:06.745 [2024-07-25 11:47:05.729379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.365 ms 00:22:06.745 [2024-07-25 11:47:05.729391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.745 [2024-07-25 11:47:05.729491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.745 [2024-07-25 11:47:05.729508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:06.745 [2024-07-25 11:47:05.729527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:22:06.745 [2024-07-25 11:47:05.729539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.745 [2024-07-25 11:47:05.729654] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:06.745 [2024-07-25 11:47:05.729695] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:06.745 [2024-07-25 11:47:05.729711] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:06.745 [2024-07-25 11:47:05.729724] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:06.745 [2024-07-25 11:47:05.729737] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:06.745 [2024-07-25 11:47:05.729749] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:06.745 [2024-07-25 11:47:05.729761] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:06.745 [2024-07-25 11:47:05.729772] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:06.745 [2024-07-25 11:47:05.729783] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:06.745 [2024-07-25 11:47:05.729795] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:06.745 [2024-07-25 11:47:05.729806] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:06.745 [2024-07-25 11:47:05.729817] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:06.745 [2024-07-25 11:47:05.729828] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:06.745 [2024-07-25 11:47:05.729840] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:06.745 [2024-07-25 11:47:05.729851] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:06.745 [2024-07-25 11:47:05.729864] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:06.745 [2024-07-25 11:47:05.729877] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:06.745 [2024-07-25 11:47:05.729889] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:06.745 [2024-07-25 11:47:05.729937] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:06.745 [2024-07-25 11:47:05.729954] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:06.745 [2024-07-25 11:47:05.729967] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:06.745 [2024-07-25 11:47:05.729979] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:06.745 [2024-07-25 11:47:05.729991] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:06.745 [2024-07-25 11:47:05.730003] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:06.745 [2024-07-25 11:47:05.730015] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:06.745 [2024-07-25 11:47:05.730026] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:06.745 [2024-07-25 11:47:05.730039] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:06.745 [2024-07-25 11:47:05.730050] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:06.745 [2024-07-25 11:47:05.730065] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:06.745 [2024-07-25 11:47:05.730077] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:06.745 [2024-07-25 11:47:05.730088] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:06.745 [2024-07-25 11:47:05.730099] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:06.745 [2024-07-25 11:47:05.730111] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:06.745 [2024-07-25 11:47:05.730122] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:06.745 [2024-07-25 11:47:05.730134] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:06.745 [2024-07-25 11:47:05.730145] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:06.745 [2024-07-25 11:47:05.730157] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:06.745 [2024-07-25 11:47:05.730168] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:06.745 [2024-07-25 11:47:05.730180] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:06.745 [2024-07-25 11:47:05.730192] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:06.745 [2024-07-25 11:47:05.730204] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:06.745 [2024-07-25 11:47:05.730216] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:06.745 [2024-07-25 11:47:05.730228] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:06.745 [2024-07-25 11:47:05.730239] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:06.745 [2024-07-25 11:47:05.730251] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:06.745 [2024-07-25 11:47:05.730264] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:06.745 [2024-07-25 11:47:05.730277] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:06.745 [2024-07-25 11:47:05.730305] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:06.745 [2024-07-25 11:47:05.730318] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:06.745 [2024-07-25 11:47:05.730330] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:06.745 
[2024-07-25 11:47:05.730342] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:06.745 [2024-07-25 11:47:05.730353] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:06.745 [2024-07-25 11:47:05.730365] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:06.745 [2024-07-25 11:47:05.730379] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:06.745 [2024-07-25 11:47:05.730394] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:06.745 [2024-07-25 11:47:05.730409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:06.745 [2024-07-25 11:47:05.730422] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:06.745 [2024-07-25 11:47:05.730435] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:22:06.745 [2024-07-25 11:47:05.730448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:06.745 [2024-07-25 11:47:05.730467] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:06.745 [2024-07-25 11:47:05.730480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:06.745 [2024-07-25 11:47:05.730493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:06.745 [2024-07-25 11:47:05.730507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:06.745 [2024-07-25 11:47:05.730519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:06.745 [2024-07-25 11:47:05.730531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:06.745 [2024-07-25 11:47:05.730544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:06.745 [2024-07-25 11:47:05.730557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:06.745 [2024-07-25 11:47:05.730569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:06.746 [2024-07-25 11:47:05.730582] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:06.746 [2024-07-25 11:47:05.730594] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:06.746 [2024-07-25 11:47:05.730608] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:06.746 [2024-07-25 11:47:05.730622] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:22:06.746 [2024-07-25 11:47:05.730635] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:06.746 [2024-07-25 11:47:05.730648] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:06.746 [2024-07-25 11:47:05.730661] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:06.746 [2024-07-25 11:47:05.730674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.746 [2024-07-25 11:47:05.730688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:06.746 [2024-07-25 11:47:05.730701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.085 ms 00:22:06.746 [2024-07-25 11:47:05.730714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.746 [2024-07-25 11:47:05.779253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.746 [2024-07-25 11:47:05.779321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:06.746 [2024-07-25 11:47:05.779350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.444 ms 00:22:06.746 [2024-07-25 11:47:05.779364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.746 [2024-07-25 11:47:05.779614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.746 [2024-07-25 11:47:05.779657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:06.746 [2024-07-25 11:47:05.779682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:22:06.746 [2024-07-25 11:47:05.779695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.004 [2024-07-25 11:47:05.823842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.004 [2024-07-25 11:47:05.823914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:07.004 [2024-07-25 11:47:05.823949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.106 ms 00:22:07.004 [2024-07-25 11:47:05.823963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.004 [2024-07-25 11:47:05.824229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.004 [2024-07-25 11:47:05.824284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:07.004 [2024-07-25 11:47:05.824314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:07.004 [2024-07-25 11:47:05.824339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.004 [2024-07-25 11:47:05.825068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.004 [2024-07-25 11:47:05.825104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:07.004 [2024-07-25 11:47:05.825121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.668 ms 00:22:07.004 [2024-07-25 11:47:05.825135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.004 [2024-07-25 11:47:05.825418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.004 [2024-07-25 11:47:05.825476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:07.004 [2024-07-25 11:47:05.825503] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.232 ms 00:22:07.004 [2024-07-25 11:47:05.825525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.004 [2024-07-25 11:47:05.847244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.004 [2024-07-25 11:47:05.847311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:07.004 [2024-07-25 11:47:05.847338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.668 ms 00:22:07.004 [2024-07-25 11:47:05.847355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.004 [2024-07-25 11:47:05.869102] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:07.004 [2024-07-25 11:47:05.869175] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:07.004 [2024-07-25 11:47:05.869204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.004 [2024-07-25 11:47:05.869223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:07.004 [2024-07-25 11:47:05.869246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.591 ms 00:22:07.004 [2024-07-25 11:47:05.869263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.004 [2024-07-25 11:47:05.905294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.004 [2024-07-25 11:47:05.905394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:07.004 [2024-07-25 11:47:05.905424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.808 ms 00:22:07.004 [2024-07-25 11:47:05.905442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.004 [2024-07-25 11:47:05.926557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.004 [2024-07-25 11:47:05.926636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:07.004 [2024-07-25 11:47:05.926664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.905 ms 00:22:07.004 [2024-07-25 11:47:05.926681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.004 [2024-07-25 11:47:05.946880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.004 [2024-07-25 11:47:05.946959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:07.004 [2024-07-25 11:47:05.946986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.914 ms 00:22:07.004 [2024-07-25 11:47:05.947003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.004 [2024-07-25 11:47:05.948262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.004 [2024-07-25 11:47:05.948319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:07.004 [2024-07-25 11:47:05.948340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.010 ms 00:22:07.004 [2024-07-25 11:47:05.948357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.004 [2024-07-25 11:47:06.043647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.004 [2024-07-25 11:47:06.043737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:07.004 [2024-07-25 11:47:06.043762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 95.240 ms 00:22:07.004 [2024-07-25 11:47:06.043777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.262 [2024-07-25 11:47:06.058891] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:07.262 [2024-07-25 11:47:06.082161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.262 [2024-07-25 11:47:06.082273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:07.262 [2024-07-25 11:47:06.082305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.113 ms 00:22:07.262 [2024-07-25 11:47:06.082324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.262 [2024-07-25 11:47:06.082568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.262 [2024-07-25 11:47:06.082599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:07.262 [2024-07-25 11:47:06.082620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:07.262 [2024-07-25 11:47:06.082637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.262 [2024-07-25 11:47:06.082746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.262 [2024-07-25 11:47:06.082787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:07.262 [2024-07-25 11:47:06.082808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:22:07.262 [2024-07-25 11:47:06.082824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.262 [2024-07-25 11:47:06.082876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.262 [2024-07-25 11:47:06.082905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:07.262 [2024-07-25 11:47:06.082956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:07.262 [2024-07-25 11:47:06.082976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.262 [2024-07-25 11:47:06.083040] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:07.262 [2024-07-25 11:47:06.083065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.262 [2024-07-25 11:47:06.083082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:07.262 [2024-07-25 11:47:06.083099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:22:07.262 [2024-07-25 11:47:06.083116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.262 [2024-07-25 11:47:06.124784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.262 [2024-07-25 11:47:06.124905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:07.262 [2024-07-25 11:47:06.124948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.617 ms 00:22:07.262 [2024-07-25 11:47:06.124968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.262 [2024-07-25 11:47:06.125228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.262 [2024-07-25 11:47:06.125267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:07.262 [2024-07-25 11:47:06.125289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:22:07.262 [2024-07-25 11:47:06.125306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
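The ftl_superblock_v5_md_layout_dump records earlier in this startup sequence describe each metadata region as `Region type:<hex> ver:<n> blk_offs:<hex> blk_sz:<hex>`, with offset and size counted in FTL blocks. Below is a minimal decoding sketch, assuming the 4096-byte FTL block size implied by the dump itself (region 0x9's blk_sz of 0x1900000 blocks is exactly the 102400.00 MiB printed for the data region); `region_mib` and `FTL_BLOCK_SIZE` are illustrative names, not SPDK APIs.

```python
import re

# Hypothetical decoder for the ftl_superblock_v5_md_layout_dump lines above;
# not an SPDK utility. Assumes a 4096-byte FTL block, consistent with region
# 0x9 (blk_sz 0x1900000 = 26214400 blocks) matching the 102400.00 MiB data
# region reported by dump_region.
FTL_BLOCK_SIZE = 4096

REGION_RE = re.compile(
    r"Region type:(0x[0-9a-f]+) ver:(\d+) "
    r"blk_offs:(0x[0-9a-f]+) blk_sz:(0x[0-9a-f]+)"
)

def region_mib(line):
    """Return (type, offset_MiB, size_MiB) parsed from one dump line."""
    m = REGION_RE.search(line)
    if m is None:
        return None
    rtype, _ver, offs_hex, sz_hex = m.groups()
    offs_mib = int(offs_hex, 16) * FTL_BLOCK_SIZE / (1024 * 1024)
    sz_mib = int(sz_hex, 16) * FTL_BLOCK_SIZE / (1024 * 1024)
    return rtype, offs_mib, sz_mib

print(region_mib("Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000"))
# -> ('0x9', 0.25, 102400.0), reproducing both figures from the dump
```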
00:22:07.262 [2024-07-25 11:47:06.126876] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:07.262 [2024-07-25 11:47:06.131348] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 435.898 ms, result 0 00:22:07.262 [2024-07-25 11:47:06.132140] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:07.262 [2024-07-25 11:47:06.149771] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:17.822  Copying: 256/256 [MB] (average 24 MBps)[2024-07-25 11:47:16.643366] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:17.822 [2024-07-25 11:47:16.658754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.822 [2024-07-25 11:47:16.658831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:17.822 [2024-07-25 11:47:16.658870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:17.822 [2024-07-25 11:47:16.658884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.822 [2024-07-25 11:47:16.658930] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:17.822 [2024-07-25 11:47:16.662555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.822 [2024-07-25 11:47:16.662605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:17.822 [2024-07-25 11:47:16.662636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.588 ms 00:22:17.822 [2024-07-25 11:47:16.662664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.822 [2024-07-25 11:47:16.662997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.822 [2024-07-25 11:47:16.663018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:17.822 [2024-07-25 11:47:16.663033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:22:17.822 [2024-07-25 11:47:16.663046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.822 [2024-07-25 11:47:16.666798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.822 [2024-07-25 11:47:16.666847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:17.822 [2024-07-25 11:47:16.666885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.727 ms 00:22:17.822 [2024-07-25 11:47:16.666898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.822 [2024-07-25 11:47:16.674700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.822 [2024-07-25 11:47:16.674749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:17.822 [2024-07-25 11:47:16.674778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.776 ms 00:22:17.822 [2024-07-25 11:47:16.674790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.822 [2024-07-25
11:47:16.704579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.822 [2024-07-25 11:47:16.704657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:17.822 [2024-07-25 11:47:16.704675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.714 ms 00:22:17.822 [2024-07-25 11:47:16.704688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.822 [2024-07-25 11:47:16.723204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.822 [2024-07-25 11:47:16.723264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:17.822 [2024-07-25 11:47:16.723282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.450 ms 00:22:17.822 [2024-07-25 11:47:16.723302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.822 [2024-07-25 11:47:16.723506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.822 [2024-07-25 11:47:16.723528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:17.822 [2024-07-25 11:47:16.723542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:22:17.822 [2024-07-25 11:47:16.723554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.822 [2024-07-25 11:47:16.754392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.822 [2024-07-25 11:47:16.754451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:17.822 [2024-07-25 11:47:16.754482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.812 ms 00:22:17.822 [2024-07-25 11:47:16.754494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.822 [2024-07-25 11:47:16.784438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.822 [2024-07-25 11:47:16.784498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:17.822 [2024-07-25 11:47:16.784515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.877 ms 00:22:17.822 [2024-07-25 11:47:16.784527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.822 [2024-07-25 11:47:16.813637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.822 [2024-07-25 11:47:16.813711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:17.822 [2024-07-25 11:47:16.813728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.048 ms 00:22:17.822 [2024-07-25 11:47:16.813740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.822 [2024-07-25 11:47:16.842542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.822 [2024-07-25 11:47:16.842599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:17.822 [2024-07-25 11:47:16.842630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.702 ms 00:22:17.822 [2024-07-25 11:47:16.842642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.822 [2024-07-25 11:47:16.842728] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:17.822 [2024-07-25 11:47:16.842761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:17.822 [2024-07-25 11:47:16.842778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 
00:22:17.822 [2024-07-25 11:47:16.842790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:17.822 [2024-07-25 11:47:16.842802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:17.822 [2024-07-25 11:47:16.842826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:17.822 [2024-07-25 11:47:16.842838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:17.822 [2024-07-25 11:47:16.842867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:17.822 [2024-07-25 11:47:16.842880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:17.822 [2024-07-25 11:47:16.842893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:17.822 [2024-07-25 11:47:16.842907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:17.822 [2024-07-25 11:47:16.842919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:17.822 [2024-07-25 11:47:16.842943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:17.822 [2024-07-25 11:47:16.842958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:17.822 [2024-07-25 11:47:16.842972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:17.822 [2024-07-25 11:47:16.842985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:17.822 [2024-07-25 11:47:16.842997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:17.822 [2024-07-25 11:47:16.843012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:17.822 [2024-07-25 11:47:16.843025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:17.822 [2024-07-25 11:47:16.843038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:17.822 [2024-07-25 11:47:16.843052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:17.822 [2024-07-25 11:47:16.843064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:17.822 [2024-07-25 11:47:16.843077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:17.822 [2024-07-25 11:47:16.843090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:17.822 [2024-07-25 11:47:16.843102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:17.822 [2024-07-25 11:47:16.843115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:17.822 [2024-07-25 11:47:16.843128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:17.822 [2024-07-25 11:47:16.843141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 
0 state: free 00:22:17.822 [2024-07-25 11:47:16.843154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:17.822 [2024-07-25 11:47:16.843166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:17.822 [2024-07-25 11:47:16.843179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:17.822 [2024-07-25 11:47:16.843192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:17.822 [2024-07-25 11:47:16.843205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:17.822 [2024-07-25 11:47:16.843218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:17.822 [2024-07-25 11:47:16.843231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:17.822 [2024-07-25 11:47:16.843245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:17.822 [2024-07-25 11:47:16.843258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:17.822 [2024-07-25 11:47:16.843271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:17.822 [2024-07-25 11:47:16.843284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:17.822 [2024-07-25 11:47:16.843297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:17.822 [2024-07-25 11:47:16.843310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:17.823 [2024-07-25 11:47:16.843323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:17.823 [2024-07-25 11:47:16.843336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:17.823 [2024-07-25 11:47:16.843349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:17.823 [2024-07-25 11:47:16.843362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:17.823 [2024-07-25 11:47:16.843374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:17.823 [2024-07-25 11:47:16.843388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:17.823 [2024-07-25 11:47:16.843400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:17.823 [2024-07-25 11:47:16.843414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:17.823 [2024-07-25 11:47:16.843427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:17.823 [2024-07-25 11:47:16.843440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:17.823 [2024-07-25 11:47:16.843453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:17.823 [2024-07-25 11:47:16.843466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
52: 0 / 261120 wr_cnt: 0 state: free 00:22:17.823 [2024-07-25 11:47:16.843479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:17.823 [2024-07-25 11:47:16.843492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:17.823 [2024-07-25 11:47:16.843505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:17.823 [2024-07-25 11:47:16.843518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:17.823 [2024-07-25 11:47:16.843531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:17.823 [2024-07-25 11:47:16.843544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:17.823 [2024-07-25 11:47:16.843556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:17.823 [2024-07-25 11:47:16.843569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:17.823 [2024-07-25 11:47:16.843581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:17.823 [2024-07-25 11:47:16.843594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:17.823 [2024-07-25 11:47:16.843607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:17.823 [2024-07-25 11:47:16.843621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:17.823 [2024-07-25 11:47:16.843633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:17.823 [2024-07-25 11:47:16.843647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:17.823 [2024-07-25 11:47:16.843660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:17.823 [2024-07-25 11:47:16.843677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:17.823 [2024-07-25 11:47:16.843690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:17.823 [2024-07-25 11:47:16.843703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:17.823 [2024-07-25 11:47:16.843716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:17.823 [2024-07-25 11:47:16.843728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:17.823 [2024-07-25 11:47:16.843741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:17.823 [2024-07-25 11:47:16.843754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:17.823 [2024-07-25 11:47:16.843766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:17.823 [2024-07-25 11:47:16.843780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:17.823 [2024-07-25 11:47:16.843792] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:17.823 [2024-07-25 11:47:16.843805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:17.823 [2024-07-25 11:47:16.843818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:17.823 [2024-07-25 11:47:16.843831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:17.823 [2024-07-25 11:47:16.843845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:17.823 [2024-07-25 11:47:16.843858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:17.823 [2024-07-25 11:47:16.843871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:17.823 [2024-07-25 11:47:16.843883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:17.823 [2024-07-25 11:47:16.843896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:17.823 [2024-07-25 11:47:16.843909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:17.823 [2024-07-25 11:47:16.843931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:17.823 [2024-07-25 11:47:16.843947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:17.823 [2024-07-25 11:47:16.843960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:17.823 [2024-07-25 11:47:16.843973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:17.823 [2024-07-25 11:47:16.843986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:17.823 [2024-07-25 11:47:16.843999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:17.823 [2024-07-25 11:47:16.844012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:17.823 [2024-07-25 11:47:16.844025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:17.823 [2024-07-25 11:47:16.844038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:17.823 [2024-07-25 11:47:16.844051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:17.823 [2024-07-25 11:47:16.844064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:17.823 [2024-07-25 11:47:16.844077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:17.823 [2024-07-25 11:47:16.844090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:17.823 [2024-07-25 11:47:16.844103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:17.823 [2024-07-25 11:47:16.844126] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:17.823 [2024-07-25 11:47:16.844139] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] device UUID: d38b5acd-953a-4711-aafb-f6576962b114 00:22:17.823 [2024-07-25 11:47:16.844153] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:17.823 [2024-07-25 11:47:16.844165] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:17.823 [2024-07-25 11:47:16.844191] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:17.823 [2024-07-25 11:47:16.844204] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:17.823 [2024-07-25 11:47:16.844216] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:17.823 [2024-07-25 11:47:16.844229] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:17.823 [2024-07-25 11:47:16.844241] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:17.823 [2024-07-25 11:47:16.844252] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:17.823 [2024-07-25 11:47:16.844263] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:17.823 [2024-07-25 11:47:16.844287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.823 [2024-07-25 11:47:16.844302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:17.823 [2024-07-25 11:47:16.844322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.560 ms 00:22:17.823 [2024-07-25 11:47:16.844335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.823 [2024-07-25 11:47:16.860595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.823 [2024-07-25 11:47:16.860682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:17.823 [2024-07-25 11:47:16.860715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.232 ms 00:22:17.823 [2024-07-25 11:47:16.860728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.823 [2024-07-25 11:47:16.861260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.823 [2024-07-25 11:47:16.861300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:17.823 [2024-07-25 11:47:16.861316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.481 ms 00:22:17.823 [2024-07-25 11:47:16.861328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.083 [2024-07-25 11:47:16.901481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:18.083 [2024-07-25 11:47:16.901551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:18.083 [2024-07-25 11:47:16.901583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:18.083 [2024-07-25 11:47:16.901596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.083 [2024-07-25 11:47:16.901717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:18.083 [2024-07-25 11:47:16.901741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:18.083 [2024-07-25 11:47:16.901755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:18.083 [2024-07-25 11:47:16.901767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.083 [2024-07-25 11:47:16.901840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:18.083 [2024-07-25 11:47:16.901861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
trim map 00:22:18.083 [2024-07-25 11:47:16.901875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:18.083 [2024-07-25 11:47:16.901887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.083 [2024-07-25 11:47:16.901915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:18.083 [2024-07-25 11:47:16.901930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:18.083 [2024-07-25 11:47:16.901965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:18.083 [2024-07-25 11:47:16.901977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.083 [2024-07-25 11:47:17.007561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:18.083 [2024-07-25 11:47:17.007653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:18.083 [2024-07-25 11:47:17.007674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:18.083 [2024-07-25 11:47:17.007688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.083 [2024-07-25 11:47:17.094519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:18.083 [2024-07-25 11:47:17.094595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:18.083 [2024-07-25 11:47:17.094616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:18.083 [2024-07-25 11:47:17.094630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.083 [2024-07-25 11:47:17.094743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:18.083 [2024-07-25 11:47:17.094762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:18.083 [2024-07-25 11:47:17.094776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:18.083 [2024-07-25 11:47:17.094789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.083 [2024-07-25 11:47:17.094831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:18.083 [2024-07-25 11:47:17.094846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:18.083 [2024-07-25 11:47:17.094860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:18.083 [2024-07-25 11:47:17.094880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.083 [2024-07-25 11:47:17.095032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:18.083 [2024-07-25 11:47:17.095054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:18.083 [2024-07-25 11:47:17.095069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:18.083 [2024-07-25 11:47:17.095082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.083 [2024-07-25 11:47:17.095141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:18.083 [2024-07-25 11:47:17.095177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:18.083 [2024-07-25 11:47:17.095192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:18.083 [2024-07-25 11:47:17.095205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.083 [2024-07-25 11:47:17.095273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:18.083 [2024-07-25 11:47:17.095290] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:18.083 [2024-07-25 11:47:17.095303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:18.083 [2024-07-25 11:47:17.095315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.083 [2024-07-25 11:47:17.095381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:18.083 [2024-07-25 11:47:17.095400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:18.083 [2024-07-25 11:47:17.095413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:18.083 [2024-07-25 11:47:17.095432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.083 [2024-07-25 11:47:17.095634] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 436.863 ms, result 0 00:22:19.458 00:22:19.458 00:22:19.458 11:47:18 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:20.025 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:22:20.025 11:47:18 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:22:20.025 11:47:18 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:22:20.025 11:47:18 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:20.025 11:47:18 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:20.025 11:47:18 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:22:20.025 11:47:18 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:22:20.025 11:47:18 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 80327 00:22:20.025 11:47:18 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 80327 ']' 00:22:20.025 11:47:18 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 80327 00:22:20.025 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (80327) - No such process 00:22:20.025 Process with pid 80327 is not found 00:22:20.025 11:47:18 ftl.ftl_trim -- common/autotest_common.sh@977 -- # echo 'Process with pid 80327 is not found' 00:22:20.025 00:22:20.025 real 1m13.147s 00:22:20.025 user 1m38.566s 00:22:20.025 sys 0m8.305s 00:22:20.025 11:47:18 ftl.ftl_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:20.025 11:47:18 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:22:20.025 ************************************ 00:22:20.025 END TEST ftl_trim 00:22:20.025 ************************************ 00:22:20.025 11:47:18 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:22:20.025 11:47:18 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:22:20.025 11:47:18 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:20.025 11:47:18 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:20.025 ************************************ 00:22:20.025 START TEST ftl_restore 00:22:20.025 ************************************ 00:22:20.025 11:47:18 ftl.ftl_restore -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:22:20.025 * Looking for test storage... 
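Throughout the restore setup below, the harness helper `get_bdev_size` derives a bdev's size in MiB from `bdev_get_bdevs` JSON by multiplying `block_size` by `num_blocks` (4096 × 1310720 blocks = 5120 MiB for nvme0n1; 4096 × 26476544 = 103424 MiB for the thin-provisioned lvol). A minimal sketch of the same computation, assuming the rpc.py output shown below; `bdev_size_mib` is an illustrative name, not part of autotest_common.sh.

```python
import json

def bdev_size_mib(bdev_json: str) -> int:
    """Compute what get_bdev_size echoes: block_size * num_blocks in MiB."""
    info = json.loads(bdev_json)[0]   # bdev_get_bdevs returns a list of bdevs
    total_bytes = info["block_size"] * info["num_blocks"]
    return total_bytes // (1024 * 1024)

# Trimmed-down stand-in for the nvme0n1 entry dumped below:
# block_size 4096, num_blocks 1310720.
sample = '[{"name": "nvme0n1", "block_size": 4096, "num_blocks": 1310720}]'
print(bdev_size_mib(sample))          # 5120, matching bdev_size=5120 below
```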
00:22:20.025 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:20.025 11:47:19 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:20.025 11:47:19 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:22:20.025 11:47:19 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:20.025 11:47:19 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:20.025 11:47:19 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:22:20.025 11:47:19 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:20.025 11:47:19 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:20.025 11:47:19 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:20.025 11:47:19 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:20.025 11:47:19 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:20.025 11:47:19 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:20.025 11:47:19 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:20.025 11:47:19 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:20.025 11:47:19 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:20.025 11:47:19 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:20.025 11:47:19 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:20.025 11:47:19 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:20.025 11:47:19 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:20.025 11:47:19 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:20.025 11:47:19 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:20.025 11:47:19 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:20.025 11:47:19 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:20.025 11:47:19 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:20.025 11:47:19 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:20.025 11:47:19 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:20.025 11:47:19 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:20.025 11:47:19 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:20.025 11:47:19 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:20.025 11:47:19 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:20.025 11:47:19 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:20.025 11:47:19 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:22:20.025 11:47:19 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.HEKtOV9GZZ 00:22:20.025 11:47:19 ftl.ftl_restore -- 
ftl/restore.sh@15 -- # getopts :u:c:f opt 00:22:20.025 11:47:19 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:22:20.025 11:47:19 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:22:20.025 11:47:19 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:22:20.025 11:47:19 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:22:20.025 11:47:19 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:22:20.025 11:47:19 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:22:20.025 11:47:19 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:22:20.025 11:47:19 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=80598 00:22:20.025 11:47:19 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:20.025 11:47:19 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 80598 00:22:20.025 11:47:19 ftl.ftl_restore -- common/autotest_common.sh@831 -- # '[' -z 80598 ']' 00:22:20.025 11:47:19 ftl.ftl_restore -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:20.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:20.025 11:47:19 ftl.ftl_restore -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:20.025 11:47:19 ftl.ftl_restore -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:20.025 11:47:19 ftl.ftl_restore -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:20.025 11:47:19 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:22:20.283 [2024-07-25 11:47:19.192627] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:22:20.283 [2024-07-25 11:47:19.192828] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80598 ] 00:22:20.540 [2024-07-25 11:47:19.366709] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:20.797 [2024-07-25 11:47:19.611350] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:21.730 11:47:20 ftl.ftl_restore -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:21.730 11:47:20 ftl.ftl_restore -- common/autotest_common.sh@864 -- # return 0 00:22:21.730 11:47:20 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:22:21.730 11:47:20 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:22:21.730 11:47:20 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:21.730 11:47:20 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:22:21.730 11:47:20 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:22:21.730 11:47:20 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:22:21.987 11:47:20 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:21.987 11:47:20 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:22:21.987 11:47:20 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:21.987 11:47:20 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:22:21.987 11:47:20 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:21.987 11:47:20 ftl.ftl_restore -- 
common/autotest_common.sh@1380 -- # local bs 00:22:21.987 11:47:20 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:22:21.987 11:47:20 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:22.244 11:47:21 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:22.244 { 00:22:22.244 "name": "nvme0n1", 00:22:22.244 "aliases": [ 00:22:22.244 "784659f5-fc54-4d95-94fc-1f4e89b89a0f" 00:22:22.244 ], 00:22:22.244 "product_name": "NVMe disk", 00:22:22.244 "block_size": 4096, 00:22:22.244 "num_blocks": 1310720, 00:22:22.244 "uuid": "784659f5-fc54-4d95-94fc-1f4e89b89a0f", 00:22:22.244 "assigned_rate_limits": { 00:22:22.244 "rw_ios_per_sec": 0, 00:22:22.244 "rw_mbytes_per_sec": 0, 00:22:22.244 "r_mbytes_per_sec": 0, 00:22:22.244 "w_mbytes_per_sec": 0 00:22:22.244 }, 00:22:22.244 "claimed": true, 00:22:22.244 "claim_type": "read_many_write_one", 00:22:22.244 "zoned": false, 00:22:22.244 "supported_io_types": { 00:22:22.244 "read": true, 00:22:22.244 "write": true, 00:22:22.244 "unmap": true, 00:22:22.244 "flush": true, 00:22:22.244 "reset": true, 00:22:22.244 "nvme_admin": true, 00:22:22.244 "nvme_io": true, 00:22:22.244 "nvme_io_md": false, 00:22:22.244 "write_zeroes": true, 00:22:22.244 "zcopy": false, 00:22:22.244 "get_zone_info": false, 00:22:22.244 "zone_management": false, 00:22:22.244 "zone_append": false, 00:22:22.244 "compare": true, 00:22:22.244 "compare_and_write": false, 00:22:22.244 "abort": true, 00:22:22.244 "seek_hole": false, 00:22:22.244 "seek_data": false, 00:22:22.244 "copy": true, 00:22:22.244 "nvme_iov_md": false 00:22:22.244 }, 00:22:22.244 "driver_specific": { 00:22:22.244 "nvme": [ 00:22:22.244 { 00:22:22.244 "pci_address": "0000:00:11.0", 00:22:22.244 "trid": { 00:22:22.244 "trtype": "PCIe", 00:22:22.244 "traddr": "0000:00:11.0" 00:22:22.244 }, 00:22:22.244 "ctrlr_data": { 00:22:22.244 "cntlid": 0, 00:22:22.244 "vendor_id": "0x1b36", 00:22:22.244 "model_number": "QEMU NVMe Ctrl", 00:22:22.244 "serial_number": "12341", 00:22:22.244 "firmware_revision": "8.0.0", 00:22:22.244 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:22.244 "oacs": { 00:22:22.244 "security": 0, 00:22:22.244 "format": 1, 00:22:22.244 "firmware": 0, 00:22:22.244 "ns_manage": 1 00:22:22.244 }, 00:22:22.244 "multi_ctrlr": false, 00:22:22.244 "ana_reporting": false 00:22:22.244 }, 00:22:22.244 "vs": { 00:22:22.244 "nvme_version": "1.4" 00:22:22.244 }, 00:22:22.244 "ns_data": { 00:22:22.244 "id": 1, 00:22:22.244 "can_share": false 00:22:22.244 } 00:22:22.244 } 00:22:22.244 ], 00:22:22.244 "mp_policy": "active_passive" 00:22:22.244 } 00:22:22.244 } 00:22:22.244 ]' 00:22:22.244 11:47:21 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:22.244 11:47:21 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:22:22.244 11:47:21 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:22.244 11:47:21 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=1310720 00:22:22.244 11:47:21 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:22:22.244 11:47:21 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 5120 00:22:22.244 11:47:21 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:22:22.244 11:47:21 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:22.244 11:47:21 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:22:22.244 11:47:21 ftl.ftl_restore -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:22.244 11:47:21 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:22.502 11:47:21 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=ddd29850-7db0-41db-8623-cdc7e6e15206 00:22:22.502 11:47:21 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:22:22.502 11:47:21 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ddd29850-7db0-41db-8623-cdc7e6e15206 00:22:22.759 11:47:21 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:22:23.017 11:47:22 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=5ed8456d-e21d-4baa-8b7c-2aee8b1f8c77 00:22:23.017 11:47:22 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 5ed8456d-e21d-4baa-8b7c-2aee8b1f8c77 00:22:23.582 11:47:22 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=da015042-bc31-4618-98a0-5ed07aaa3a27 00:22:23.582 11:47:22 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:22:23.582 11:47:22 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 da015042-bc31-4618-98a0-5ed07aaa3a27 00:22:23.582 11:47:22 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:22:23.582 11:47:22 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:22:23.582 11:47:22 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=da015042-bc31-4618-98a0-5ed07aaa3a27 00:22:23.582 11:47:22 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:22:23.582 11:47:22 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size da015042-bc31-4618-98a0-5ed07aaa3a27 00:22:23.582 11:47:22 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=da015042-bc31-4618-98a0-5ed07aaa3a27 00:22:23.582 11:47:22 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:23.582 11:47:22 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:22:23.582 11:47:22 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:22:23.582 11:47:22 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b da015042-bc31-4618-98a0-5ed07aaa3a27 00:22:23.582 11:47:22 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:23.582 { 00:22:23.582 "name": "da015042-bc31-4618-98a0-5ed07aaa3a27", 00:22:23.582 "aliases": [ 00:22:23.582 "lvs/nvme0n1p0" 00:22:23.582 ], 00:22:23.582 "product_name": "Logical Volume", 00:22:23.582 "block_size": 4096, 00:22:23.582 "num_blocks": 26476544, 00:22:23.582 "uuid": "da015042-bc31-4618-98a0-5ed07aaa3a27", 00:22:23.582 "assigned_rate_limits": { 00:22:23.582 "rw_ios_per_sec": 0, 00:22:23.582 "rw_mbytes_per_sec": 0, 00:22:23.582 "r_mbytes_per_sec": 0, 00:22:23.582 "w_mbytes_per_sec": 0 00:22:23.582 }, 00:22:23.582 "claimed": false, 00:22:23.582 "zoned": false, 00:22:23.583 "supported_io_types": { 00:22:23.583 "read": true, 00:22:23.583 "write": true, 00:22:23.583 "unmap": true, 00:22:23.583 "flush": false, 00:22:23.583 "reset": true, 00:22:23.583 "nvme_admin": false, 00:22:23.583 "nvme_io": false, 00:22:23.583 "nvme_io_md": false, 00:22:23.583 "write_zeroes": true, 00:22:23.583 "zcopy": false, 00:22:23.583 "get_zone_info": false, 00:22:23.583 "zone_management": false, 00:22:23.583 "zone_append": false, 00:22:23.583 "compare": false, 00:22:23.583 "compare_and_write": false, 00:22:23.583 "abort": 
false, 00:22:23.583 "seek_hole": true, 00:22:23.583 "seek_data": true, 00:22:23.583 "copy": false, 00:22:23.583 "nvme_iov_md": false 00:22:23.583 }, 00:22:23.583 "driver_specific": { 00:22:23.583 "lvol": { 00:22:23.583 "lvol_store_uuid": "5ed8456d-e21d-4baa-8b7c-2aee8b1f8c77", 00:22:23.583 "base_bdev": "nvme0n1", 00:22:23.583 "thin_provision": true, 00:22:23.583 "num_allocated_clusters": 0, 00:22:23.583 "snapshot": false, 00:22:23.583 "clone": false, 00:22:23.583 "esnap_clone": false 00:22:23.583 } 00:22:23.583 } 00:22:23.583 } 00:22:23.583 ]' 00:22:23.583 11:47:22 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:23.840 11:47:22 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:22:23.840 11:47:22 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:23.840 11:47:22 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:23.840 11:47:22 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:23.840 11:47:22 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:22:23.840 11:47:22 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:22:23.840 11:47:22 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:22:23.840 11:47:22 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:22:24.117 11:47:23 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:24.117 11:47:23 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:24.117 11:47:23 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size da015042-bc31-4618-98a0-5ed07aaa3a27 00:22:24.117 11:47:23 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=da015042-bc31-4618-98a0-5ed07aaa3a27 00:22:24.117 11:47:23 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:24.117 11:47:23 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:22:24.117 11:47:23 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:22:24.117 11:47:23 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b da015042-bc31-4618-98a0-5ed07aaa3a27 00:22:24.404 11:47:23 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:24.404 { 00:22:24.404 "name": "da015042-bc31-4618-98a0-5ed07aaa3a27", 00:22:24.404 "aliases": [ 00:22:24.404 "lvs/nvme0n1p0" 00:22:24.404 ], 00:22:24.404 "product_name": "Logical Volume", 00:22:24.404 "block_size": 4096, 00:22:24.404 "num_blocks": 26476544, 00:22:24.404 "uuid": "da015042-bc31-4618-98a0-5ed07aaa3a27", 00:22:24.404 "assigned_rate_limits": { 00:22:24.404 "rw_ios_per_sec": 0, 00:22:24.404 "rw_mbytes_per_sec": 0, 00:22:24.404 "r_mbytes_per_sec": 0, 00:22:24.404 "w_mbytes_per_sec": 0 00:22:24.404 }, 00:22:24.404 "claimed": false, 00:22:24.404 "zoned": false, 00:22:24.404 "supported_io_types": { 00:22:24.404 "read": true, 00:22:24.404 "write": true, 00:22:24.404 "unmap": true, 00:22:24.404 "flush": false, 00:22:24.404 "reset": true, 00:22:24.404 "nvme_admin": false, 00:22:24.404 "nvme_io": false, 00:22:24.404 "nvme_io_md": false, 00:22:24.404 "write_zeroes": true, 00:22:24.404 "zcopy": false, 00:22:24.404 "get_zone_info": false, 00:22:24.404 "zone_management": false, 00:22:24.404 "zone_append": false, 00:22:24.404 "compare": false, 00:22:24.404 "compare_and_write": false, 00:22:24.404 "abort": false, 00:22:24.404 "seek_hole": true, 00:22:24.404 "seek_data": 
true, 00:22:24.404 "copy": false, 00:22:24.404 "nvme_iov_md": false 00:22:24.404 }, 00:22:24.404 "driver_specific": { 00:22:24.404 "lvol": { 00:22:24.404 "lvol_store_uuid": "5ed8456d-e21d-4baa-8b7c-2aee8b1f8c77", 00:22:24.404 "base_bdev": "nvme0n1", 00:22:24.404 "thin_provision": true, 00:22:24.404 "num_allocated_clusters": 0, 00:22:24.404 "snapshot": false, 00:22:24.404 "clone": false, 00:22:24.404 "esnap_clone": false 00:22:24.404 } 00:22:24.404 } 00:22:24.404 } 00:22:24.404 ]' 00:22:24.404 11:47:23 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:24.404 11:47:23 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:22:24.404 11:47:23 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:24.666 11:47:23 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:24.666 11:47:23 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:24.666 11:47:23 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:22:24.666 11:47:23 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:22:24.666 11:47:23 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:24.666 11:47:23 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:22:24.666 11:47:23 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size da015042-bc31-4618-98a0-5ed07aaa3a27 00:22:24.666 11:47:23 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=da015042-bc31-4618-98a0-5ed07aaa3a27 00:22:24.666 11:47:23 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:24.666 11:47:23 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:22:24.666 11:47:23 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:22:24.666 11:47:23 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b da015042-bc31-4618-98a0-5ed07aaa3a27 00:22:25.232 11:47:23 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:25.232 { 00:22:25.232 "name": "da015042-bc31-4618-98a0-5ed07aaa3a27", 00:22:25.232 "aliases": [ 00:22:25.233 "lvs/nvme0n1p0" 00:22:25.233 ], 00:22:25.233 "product_name": "Logical Volume", 00:22:25.233 "block_size": 4096, 00:22:25.233 "num_blocks": 26476544, 00:22:25.233 "uuid": "da015042-bc31-4618-98a0-5ed07aaa3a27", 00:22:25.233 "assigned_rate_limits": { 00:22:25.233 "rw_ios_per_sec": 0, 00:22:25.233 "rw_mbytes_per_sec": 0, 00:22:25.233 "r_mbytes_per_sec": 0, 00:22:25.233 "w_mbytes_per_sec": 0 00:22:25.233 }, 00:22:25.233 "claimed": false, 00:22:25.233 "zoned": false, 00:22:25.233 "supported_io_types": { 00:22:25.233 "read": true, 00:22:25.233 "write": true, 00:22:25.233 "unmap": true, 00:22:25.233 "flush": false, 00:22:25.233 "reset": true, 00:22:25.233 "nvme_admin": false, 00:22:25.233 "nvme_io": false, 00:22:25.233 "nvme_io_md": false, 00:22:25.233 "write_zeroes": true, 00:22:25.233 "zcopy": false, 00:22:25.233 "get_zone_info": false, 00:22:25.233 "zone_management": false, 00:22:25.233 "zone_append": false, 00:22:25.233 "compare": false, 00:22:25.233 "compare_and_write": false, 00:22:25.233 "abort": false, 00:22:25.233 "seek_hole": true, 00:22:25.233 "seek_data": true, 00:22:25.233 "copy": false, 00:22:25.233 "nvme_iov_md": false 00:22:25.233 }, 00:22:25.233 "driver_specific": { 00:22:25.233 "lvol": { 00:22:25.233 "lvol_store_uuid": "5ed8456d-e21d-4baa-8b7c-2aee8b1f8c77", 00:22:25.233 "base_bdev": 
"nvme0n1", 00:22:25.233 "thin_provision": true, 00:22:25.233 "num_allocated_clusters": 0, 00:22:25.233 "snapshot": false, 00:22:25.233 "clone": false, 00:22:25.233 "esnap_clone": false 00:22:25.233 } 00:22:25.233 } 00:22:25.233 } 00:22:25.233 ]' 00:22:25.233 11:47:24 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:25.233 11:47:24 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:22:25.233 11:47:24 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:25.233 11:47:24 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:25.233 11:47:24 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:25.233 11:47:24 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:22:25.233 11:47:24 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:22:25.233 11:47:24 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d da015042-bc31-4618-98a0-5ed07aaa3a27 --l2p_dram_limit 10' 00:22:25.233 11:47:24 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:22:25.233 11:47:24 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:22:25.233 11:47:24 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:22:25.233 11:47:24 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:22:25.233 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:22:25.233 11:47:24 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d da015042-bc31-4618-98a0-5ed07aaa3a27 --l2p_dram_limit 10 -c nvc0n1p0 00:22:25.492 [2024-07-25 11:47:24.326976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.492 [2024-07-25 11:47:24.327061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:25.492 [2024-07-25 11:47:24.327085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:22:25.492 [2024-07-25 11:47:24.327102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.492 [2024-07-25 11:47:24.327196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.492 [2024-07-25 11:47:24.327219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:25.492 [2024-07-25 11:47:24.327232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:22:25.492 [2024-07-25 11:47:24.327247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.492 [2024-07-25 11:47:24.327279] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:25.492 [2024-07-25 11:47:24.328403] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:25.492 [2024-07-25 11:47:24.328443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.492 [2024-07-25 11:47:24.328465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:25.492 [2024-07-25 11:47:24.328479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.172 ms 00:22:25.492 [2024-07-25 11:47:24.328494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.493 [2024-07-25 11:47:24.328663] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID c1430877-b154-4b5e-893a-2e52e6ce0696 00:22:25.493 [2024-07-25 
11:47:24.330506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.493 [2024-07-25 11:47:24.330543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:25.493 [2024-07-25 11:47:24.330563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:22:25.493 [2024-07-25 11:47:24.330577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.493 [2024-07-25 11:47:24.340282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.493 [2024-07-25 11:47:24.340333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:25.493 [2024-07-25 11:47:24.340354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.594 ms 00:22:25.493 [2024-07-25 11:47:24.340367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.493 [2024-07-25 11:47:24.340507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.493 [2024-07-25 11:47:24.340528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:25.493 [2024-07-25 11:47:24.340545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:22:25.493 [2024-07-25 11:47:24.340557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.493 [2024-07-25 11:47:24.340648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.493 [2024-07-25 11:47:24.340667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:25.493 [2024-07-25 11:47:24.340687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:22:25.493 [2024-07-25 11:47:24.340699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.493 [2024-07-25 11:47:24.340761] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:25.493 [2024-07-25 11:47:24.346070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.493 [2024-07-25 11:47:24.346138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:25.493 [2024-07-25 11:47:24.346155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.325 ms 00:22:25.493 [2024-07-25 11:47:24.346170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.493 [2024-07-25 11:47:24.346219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.493 [2024-07-25 11:47:24.346239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:25.493 [2024-07-25 11:47:24.346251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:25.493 [2024-07-25 11:47:24.346265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.493 [2024-07-25 11:47:24.346310] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:25.493 [2024-07-25 11:47:24.346489] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:25.493 [2024-07-25 11:47:24.346519] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:25.493 [2024-07-25 11:47:24.346543] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:22:25.493 [2024-07-25 11:47:24.346559] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 
103424.00 MiB 00:22:25.493 [2024-07-25 11:47:24.346578] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:25.493 [2024-07-25 11:47:24.346590] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:25.493 [2024-07-25 11:47:24.346610] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:25.493 [2024-07-25 11:47:24.346621] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:25.493 [2024-07-25 11:47:24.346644] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:25.493 [2024-07-25 11:47:24.346658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.493 [2024-07-25 11:47:24.346672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:25.493 [2024-07-25 11:47:24.346685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.349 ms 00:22:25.493 [2024-07-25 11:47:24.346700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.493 [2024-07-25 11:47:24.346793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.493 [2024-07-25 11:47:24.346813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:25.493 [2024-07-25 11:47:24.346826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:22:25.493 [2024-07-25 11:47:24.346844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.493 [2024-07-25 11:47:24.346973] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:25.493 [2024-07-25 11:47:24.346999] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:25.493 [2024-07-25 11:47:24.347026] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:25.493 [2024-07-25 11:47:24.347042] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:25.493 [2024-07-25 11:47:24.347054] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:25.493 [2024-07-25 11:47:24.347068] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:25.493 [2024-07-25 11:47:24.347080] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:25.493 [2024-07-25 11:47:24.347093] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:25.493 [2024-07-25 11:47:24.347103] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:25.493 [2024-07-25 11:47:24.347116] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:25.493 [2024-07-25 11:47:24.347130] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:25.493 [2024-07-25 11:47:24.347146] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:25.493 [2024-07-25 11:47:24.347157] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:25.493 [2024-07-25 11:47:24.347170] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:25.493 [2024-07-25 11:47:24.347182] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:25.493 [2024-07-25 11:47:24.347196] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:25.493 [2024-07-25 11:47:24.347206] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:25.493 [2024-07-25 11:47:24.347223] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 
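For reference, the two capacity figures just above are consistent with the sizes the test computed earlier: the base device is the thin-provisioned lvol whose size in MiB follows from the block_size and num_blocks that jq pulled out of bdev_get_bdevs, and the NV cache is the 5171 MiB split carved off nvc0n1. A minimal sketch of that arithmetic, with the rpc.py path abbreviated and the helper's internals assumed from the jq calls traced above:

bs=$(scripts/rpc.py bdev_get_bdevs -b da015042-bc31-4618-98a0-5ed07aaa3a27 | jq '.[] .block_size')   # 4096
nb=$(scripts/rpc.py bdev_get_bdevs -b da015042-bc31-4618-98a0-5ed07aaa3a27 | jq '.[] .num_blocks')   # 26476544
echo $(( bs * nb / 1024 / 1024 ))   # 103424, matching "Base device capacity: 103424.00 MiB"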
00:22:25.493 [2024-07-25 11:47:24.347234] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:25.493 [2024-07-25 11:47:24.347247] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:25.493 [2024-07-25 11:47:24.347259] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:25.493 [2024-07-25 11:47:24.347272] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:25.493 [2024-07-25 11:47:24.347283] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:25.493 [2024-07-25 11:47:24.347297] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:25.493 [2024-07-25 11:47:24.347307] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:25.493 [2024-07-25 11:47:24.347321] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:25.493 [2024-07-25 11:47:24.347332] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:25.493 [2024-07-25 11:47:24.347345] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:25.493 [2024-07-25 11:47:24.347356] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:25.493 [2024-07-25 11:47:24.347369] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:25.493 [2024-07-25 11:47:24.347380] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:25.493 [2024-07-25 11:47:24.347393] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:25.493 [2024-07-25 11:47:24.347404] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:25.493 [2024-07-25 11:47:24.347420] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:25.493 [2024-07-25 11:47:24.347431] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:25.493 [2024-07-25 11:47:24.347445] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:25.493 [2024-07-25 11:47:24.347456] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:25.493 [2024-07-25 11:47:24.347471] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:25.493 [2024-07-25 11:47:24.347482] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:25.493 [2024-07-25 11:47:24.347496] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:25.493 [2024-07-25 11:47:24.347507] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:25.493 [2024-07-25 11:47:24.347521] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:25.493 [2024-07-25 11:47:24.347533] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:25.493 [2024-07-25 11:47:24.347547] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:25.493 [2024-07-25 11:47:24.347559] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:25.493 [2024-07-25 11:47:24.347573] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:25.493 [2024-07-25 11:47:24.347585] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:25.493 [2024-07-25 11:47:24.347600] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:25.493 [2024-07-25 11:47:24.347611] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:25.493 [2024-07-25 11:47:24.347627] ftl_layout.c: 121:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:25.493 [2024-07-25 11:47:24.347638] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:25.493 [2024-07-25 11:47:24.347652] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:25.493 [2024-07-25 11:47:24.347663] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:25.493 [2024-07-25 11:47:24.347682] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:25.493 [2024-07-25 11:47:24.347700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:25.493 [2024-07-25 11:47:24.347717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:25.493 [2024-07-25 11:47:24.347729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:25.493 [2024-07-25 11:47:24.347743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:25.493 [2024-07-25 11:47:24.347755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:25.494 [2024-07-25 11:47:24.347769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:25.494 [2024-07-25 11:47:24.347781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:25.494 [2024-07-25 11:47:24.347796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:25.494 [2024-07-25 11:47:24.347809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:25.494 [2024-07-25 11:47:24.347823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:25.494 [2024-07-25 11:47:24.347834] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:25.494 [2024-07-25 11:47:24.347852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:25.494 [2024-07-25 11:47:24.347864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:25.494 [2024-07-25 11:47:24.347878] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:25.494 [2024-07-25 11:47:24.347891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:25.494 [2024-07-25 11:47:24.347905] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:25.494 [2024-07-25 11:47:24.347933] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:25.494 [2024-07-25 11:47:24.347952] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:25.494 [2024-07-25 11:47:24.347964] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:25.494 [2024-07-25 11:47:24.347979] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:25.494 [2024-07-25 11:47:24.347992] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:25.494 [2024-07-25 11:47:24.348008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.494 [2024-07-25 11:47:24.348021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:25.494 [2024-07-25 11:47:24.348036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.111 ms 00:22:25.494 [2024-07-25 11:47:24.348049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.494 [2024-07-25 11:47:24.348141] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:22:25.494 [2024-07-25 11:47:24.348171] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:22:28.023 [2024-07-25 11:47:26.862587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.023 [2024-07-25 11:47:26.862677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:28.023 [2024-07-25 11:47:26.862706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2514.428 ms 00:22:28.023 [2024-07-25 11:47:26.862720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.023 [2024-07-25 11:47:26.905410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.023 [2024-07-25 11:47:26.905480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:28.023 [2024-07-25 11:47:26.905506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.315 ms 00:22:28.023 [2024-07-25 11:47:26.905520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.023 [2024-07-25 11:47:26.905739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.023 [2024-07-25 11:47:26.905760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:28.023 [2024-07-25 11:47:26.905782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:22:28.023 [2024-07-25 11:47:26.905794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.023 [2024-07-25 11:47:26.951516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.023 [2024-07-25 11:47:26.951585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:28.023 [2024-07-25 11:47:26.951609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.654 ms 00:22:28.023 [2024-07-25 11:47:26.951623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.023 [2024-07-25 11:47:26.951700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.023 [2024-07-25 11:47:26.951717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:28.023 [2024-07-25 11:47:26.951740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.005 ms 00:22:28.023 [2024-07-25 11:47:26.951752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.023 [2024-07-25 11:47:26.952446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.023 [2024-07-25 11:47:26.952476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:28.023 [2024-07-25 11:47:26.952495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.577 ms 00:22:28.023 [2024-07-25 11:47:26.952508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.023 [2024-07-25 11:47:26.952686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.023 [2024-07-25 11:47:26.952724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:28.023 [2024-07-25 11:47:26.952742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:22:28.023 [2024-07-25 11:47:26.952754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.023 [2024-07-25 11:47:26.974172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.023 [2024-07-25 11:47:26.974225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:28.023 [2024-07-25 11:47:26.974249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.380 ms 00:22:28.023 [2024-07-25 11:47:26.974270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.023 [2024-07-25 11:47:26.989415] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:28.023 [2024-07-25 11:47:26.993624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.023 [2024-07-25 11:47:26.993666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:28.023 [2024-07-25 11:47:26.993694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.203 ms 00:22:28.023 [2024-07-25 11:47:26.993709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.282 [2024-07-25 11:47:27.080120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.282 [2024-07-25 11:47:27.080238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:28.282 [2024-07-25 11:47:27.080262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.345 ms 00:22:28.282 [2024-07-25 11:47:27.080289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.282 [2024-07-25 11:47:27.080549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.282 [2024-07-25 11:47:27.080580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:28.282 [2024-07-25 11:47:27.080596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:22:28.282 [2024-07-25 11:47:27.080625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.282 [2024-07-25 11:47:27.114185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.282 [2024-07-25 11:47:27.114247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:28.282 [2024-07-25 11:47:27.114267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.488 ms 00:22:28.282 [2024-07-25 11:47:27.114302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.282 [2024-07-25 11:47:27.147041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.282 [2024-07-25 
11:47:27.147087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:28.282 [2024-07-25 11:47:27.147105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.667 ms 00:22:28.282 [2024-07-25 11:47:27.147121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.282 [2024-07-25 11:47:27.148053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.282 [2024-07-25 11:47:27.148089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:28.282 [2024-07-25 11:47:27.148108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.840 ms 00:22:28.282 [2024-07-25 11:47:27.148138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.282 [2024-07-25 11:47:27.244492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.282 [2024-07-25 11:47:27.244561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:28.282 [2024-07-25 11:47:27.244583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 96.259 ms 00:22:28.282 [2024-07-25 11:47:27.244615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.282 [2024-07-25 11:47:27.277532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.282 [2024-07-25 11:47:27.277580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:28.282 [2024-07-25 11:47:27.277600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.848 ms 00:22:28.282 [2024-07-25 11:47:27.277616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.282 [2024-07-25 11:47:27.310506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.282 [2024-07-25 11:47:27.310554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:22:28.282 [2024-07-25 11:47:27.310571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.836 ms 00:22:28.282 [2024-07-25 11:47:27.310585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.560 [2024-07-25 11:47:27.343282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.560 [2024-07-25 11:47:27.343329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:28.560 [2024-07-25 11:47:27.343347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.647 ms 00:22:28.560 [2024-07-25 11:47:27.343362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.560 [2024-07-25 11:47:27.343426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.560 [2024-07-25 11:47:27.343450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:28.560 [2024-07-25 11:47:27.343465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:28.560 [2024-07-25 11:47:27.343484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.560 [2024-07-25 11:47:27.343614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.560 [2024-07-25 11:47:27.343642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:28.560 [2024-07-25 11:47:27.343656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:22:28.560 [2024-07-25 11:47:27.343671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.560 [2024-07-25 11:47:27.345173] mngt/ftl_mngt.c: 
459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3017.499 ms, result 0 00:22:28.560 { 00:22:28.560 "name": "ftl0", 00:22:28.560 "uuid": "c1430877-b154-4b5e-893a-2e52e6ce0696" 00:22:28.560 } 00:22:28.560 11:47:27 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:22:28.560 11:47:27 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:22:28.818 11:47:27 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:22:28.818 11:47:27 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:22:29.077 [2024-07-25 11:47:27.876315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.077 [2024-07-25 11:47:27.876378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:29.077 [2024-07-25 11:47:27.876412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:29.077 [2024-07-25 11:47:27.876426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.077 [2024-07-25 11:47:27.876470] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:29.077 [2024-07-25 11:47:27.880218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.077 [2024-07-25 11:47:27.880265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:29.077 [2024-07-25 11:47:27.880291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.712 ms 00:22:29.077 [2024-07-25 11:47:27.880307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.077 [2024-07-25 11:47:27.880628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.077 [2024-07-25 11:47:27.880662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:29.077 [2024-07-25 11:47:27.880690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:22:29.077 [2024-07-25 11:47:27.880706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.077 [2024-07-25 11:47:27.883886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.077 [2024-07-25 11:47:27.883929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:29.077 [2024-07-25 11:47:27.883946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.156 ms 00:22:29.077 [2024-07-25 11:47:27.883961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.077 [2024-07-25 11:47:27.890513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.077 [2024-07-25 11:47:27.890574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:29.077 [2024-07-25 11:47:27.890590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.527 ms 00:22:29.077 [2024-07-25 11:47:27.890605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.077 [2024-07-25 11:47:27.922194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.078 [2024-07-25 11:47:27.922242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:29.078 [2024-07-25 11:47:27.922260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.493 ms 00:22:29.078 [2024-07-25 11:47:27.922276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.078 [2024-07-25 
11:47:27.941389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.078 [2024-07-25 11:47:27.941439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:29.078 [2024-07-25 11:47:27.941458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.063 ms 00:22:29.078 [2024-07-25 11:47:27.941474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.078 [2024-07-25 11:47:27.941671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.078 [2024-07-25 11:47:27.941698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:29.078 [2024-07-25 11:47:27.941713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:22:29.078 [2024-07-25 11:47:27.941728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.078 [2024-07-25 11:47:27.972198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.078 [2024-07-25 11:47:27.972246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:29.078 [2024-07-25 11:47:27.972264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.440 ms 00:22:29.078 [2024-07-25 11:47:27.972287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.078 [2024-07-25 11:47:28.002432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.078 [2024-07-25 11:47:28.002477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:29.078 [2024-07-25 11:47:28.002494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.093 ms 00:22:29.078 [2024-07-25 11:47:28.002510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.078 [2024-07-25 11:47:28.032379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.078 [2024-07-25 11:47:28.032428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:29.078 [2024-07-25 11:47:28.032445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.820 ms 00:22:29.078 [2024-07-25 11:47:28.032460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.078 [2024-07-25 11:47:28.072189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.078 [2024-07-25 11:47:28.072232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:29.078 [2024-07-25 11:47:28.072250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.602 ms 00:22:29.078 [2024-07-25 11:47:28.072265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.078 [2024-07-25 11:47:28.072323] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:29.078 [2024-07-25 11:47:28.072352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:29.078 [2024-07-25 11:47:28.072372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:29.078 [2024-07-25 11:47:28.072388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:29.078 [2024-07-25 11:47:28.072402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:29.078 [2024-07-25 11:47:28.072417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:29.078 [2024-07-25 
11:47:28.072429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:29.078 [2024-07-25 11:47:28.072445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:29.078 [2024-07-25 11:47:28.072458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:29.078 [2024-07-25 11:47:28.072478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:29.078 [2024-07-25 11:47:28.072491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:29.078 [2024-07-25 11:47:28.072506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:29.078 [2024-07-25 11:47:28.072518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:29.078 [2024-07-25 11:47:28.072533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:29.078 [2024-07-25 11:47:28.072546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:29.078 [2024-07-25 11:47:28.072561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:29.078 [2024-07-25 11:47:28.072573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:29.078 [2024-07-25 11:47:28.072589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:29.078 [2024-07-25 11:47:28.072616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:29.078 [2024-07-25 11:47:28.072646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:29.078 [2024-07-25 11:47:28.072657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:29.078 [2024-07-25 11:47:28.072673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:29.078 [2024-07-25 11:47:28.072685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:29.078 [2024-07-25 11:47:28.072710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:29.078 [2024-07-25 11:47:28.072722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:29.078 [2024-07-25 11:47:28.072739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:29.078 [2024-07-25 11:47:28.072751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:29.078 [2024-07-25 11:47:28.072767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:29.078 [2024-07-25 11:47:28.072778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:29.078 [2024-07-25 11:47:28.072792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:29.078 [2024-07-25 11:47:28.072804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 
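For reference, the 'FTL shutdown' trace that this band-validity dump belongs to was kicked off by the unload RPC at restore.sh line 65; immediately before that, lines 61-63 snapshot the bdev subsystem configuration so the device can later be brought back with the same UUID. A minimal sketch of that sequence, with the rpc.py path abbreviated and the ftl.json destination assumed from the spdk_dd invocation later in the log:

{
  echo '{"subsystems": ['
  scripts/rpc.py save_subsystem_config -n bdev
  echo ']}'
} > ftl.json                             # consumed later via spdk_dd --json=ftl.json
scripts/rpc.py bdev_ftl_unload -b ftl0   # prints 'true' once shutdown completes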
00:22:29.078 [2024-07-25 11:47:28.072818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:29.078 [2024-07-25 11:47:28.072830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:29.078 [2024-07-25 11:47:28.072844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:29.078 [2024-07-25 11:47:28.072857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:29.078 [2024-07-25 11:47:28.072873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:29.078 [2024-07-25 11:47:28.072887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:29.078 [2024-07-25 11:47:28.072902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:29.078 [2024-07-25 11:47:28.072914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:29.078 [2024-07-25 11:47:28.072929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:29.078 [2024-07-25 11:47:28.072941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:29.078 [2024-07-25 11:47:28.072986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:29.078 [2024-07-25 11:47:28.073002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:29.078 [2024-07-25 11:47:28.073018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:29.078 [2024-07-25 11:47:28.073030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:29.078 [2024-07-25 11:47:28.073060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:29.078 [2024-07-25 11:47:28.073073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:29.078 [2024-07-25 11:47:28.073089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:29.078 [2024-07-25 11:47:28.073101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:29.078 [2024-07-25 11:47:28.073116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:29.079 [2024-07-25 11:47:28.073128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:29.079 [2024-07-25 11:47:28.073143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:29.079 [2024-07-25 11:47:28.073155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:29.079 [2024-07-25 11:47:28.073170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:29.079 [2024-07-25 11:47:28.073182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:29.079 [2024-07-25 11:47:28.073197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 
wr_cnt: 0 state: free 00:22:29.079 [2024-07-25 11:47:28.073210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:29.079 [2024-07-25 11:47:28.073227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:29.079 [2024-07-25 11:47:28.073239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:29.079 [2024-07-25 11:47:28.073254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:29.079 [2024-07-25 11:47:28.073297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:29.079 [2024-07-25 11:47:28.073313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:29.079 [2024-07-25 11:47:28.073326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:29.079 [2024-07-25 11:47:28.073345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:29.079 [2024-07-25 11:47:28.073373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:29.079 [2024-07-25 11:47:28.073388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:29.079 [2024-07-25 11:47:28.073431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:29.079 [2024-07-25 11:47:28.073450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:29.079 [2024-07-25 11:47:28.073464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:29.079 [2024-07-25 11:47:28.073480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:29.079 [2024-07-25 11:47:28.073493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:29.079 [2024-07-25 11:47:28.073508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:29.079 [2024-07-25 11:47:28.073522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:29.079 [2024-07-25 11:47:28.073542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:29.079 [2024-07-25 11:47:28.073555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:29.079 [2024-07-25 11:47:28.073570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:29.079 [2024-07-25 11:47:28.073583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:29.079 [2024-07-25 11:47:28.073599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:29.079 [2024-07-25 11:47:28.073612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:29.079 [2024-07-25 11:47:28.073627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:29.079 [2024-07-25 11:47:28.073640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:29.079 [2024-07-25 11:47:28.073655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:29.079 [2024-07-25 11:47:28.073669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:29.079 [2024-07-25 11:47:28.073684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:29.079 [2024-07-25 11:47:28.073697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:29.079 [2024-07-25 11:47:28.073713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:29.079 [2024-07-25 11:47:28.073726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:29.079 [2024-07-25 11:47:28.073742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:29.079 [2024-07-25 11:47:28.073755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:29.079 [2024-07-25 11:47:28.073774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:29.079 [2024-07-25 11:47:28.073787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:29.079 [2024-07-25 11:47:28.073803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:29.079 [2024-07-25 11:47:28.073816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:29.079 [2024-07-25 11:47:28.073831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:29.079 [2024-07-25 11:47:28.073844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:29.079 [2024-07-25 11:47:28.073859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:29.079 [2024-07-25 11:47:28.073872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:29.079 [2024-07-25 11:47:28.073888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:29.079 [2024-07-25 11:47:28.073901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:29.079 [2024-07-25 11:47:28.073919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:29.079 [2024-07-25 11:47:28.073932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:29.079 [2024-07-25 11:47:28.073957] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:29.079 [2024-07-25 11:47:28.073970] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c1430877-b154-4b5e-893a-2e52e6ce0696 00:22:29.079 [2024-07-25 11:47:28.073986] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:29.079 [2024-07-25 11:47:28.074010] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:29.079 [2024-07-25 11:47:28.074029] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:29.079 [2024-07-25 11:47:28.074042] 
ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:29.079 [2024-07-25 11:47:28.074056] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:29.079 [2024-07-25 11:47:28.074069] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:29.079 [2024-07-25 11:47:28.074084] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:29.079 [2024-07-25 11:47:28.074095] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:29.079 [2024-07-25 11:47:28.074108] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:29.079 [2024-07-25 11:47:28.074121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.079 [2024-07-25 11:47:28.074136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:29.079 [2024-07-25 11:47:28.074150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.801 ms 00:22:29.079 [2024-07-25 11:47:28.074168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.079 [2024-07-25 11:47:28.091079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.079 [2024-07-25 11:47:28.091122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:29.079 [2024-07-25 11:47:28.091139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.846 ms 00:22:29.079 [2024-07-25 11:47:28.091155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.079 [2024-07-25 11:47:28.091634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.079 [2024-07-25 11:47:28.091667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:29.079 [2024-07-25 11:47:28.091690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.450 ms 00:22:29.079 [2024-07-25 11:47:28.091705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.337 [2024-07-25 11:47:28.145139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:29.337 [2024-07-25 11:47:28.145204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:29.337 [2024-07-25 11:47:28.145222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:29.337 [2024-07-25 11:47:28.145239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.337 [2024-07-25 11:47:28.145340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:29.337 [2024-07-25 11:47:28.145361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:29.337 [2024-07-25 11:47:28.145379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:29.337 [2024-07-25 11:47:28.145393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.337 [2024-07-25 11:47:28.145527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:29.337 [2024-07-25 11:47:28.145554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:29.337 [2024-07-25 11:47:28.145568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:29.337 [2024-07-25 11:47:28.145583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.337 [2024-07-25 11:47:28.145613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:29.338 [2024-07-25 11:47:28.145635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid 
map 00:22:29.338 [2024-07-25 11:47:28.145648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:29.338 [2024-07-25 11:47:28.145667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.338 [2024-07-25 11:47:28.249818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:29.338 [2024-07-25 11:47:28.249890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:29.338 [2024-07-25 11:47:28.249912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:29.338 [2024-07-25 11:47:28.249940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.338 [2024-07-25 11:47:28.335690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:29.338 [2024-07-25 11:47:28.335769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:29.338 [2024-07-25 11:47:28.335794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:29.338 [2024-07-25 11:47:28.335811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.338 [2024-07-25 11:47:28.335994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:29.338 [2024-07-25 11:47:28.336021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:29.338 [2024-07-25 11:47:28.336037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:29.338 [2024-07-25 11:47:28.336052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.338 [2024-07-25 11:47:28.336127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:29.338 [2024-07-25 11:47:28.336153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:29.338 [2024-07-25 11:47:28.336167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:29.338 [2024-07-25 11:47:28.336182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.338 [2024-07-25 11:47:28.336344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:29.338 [2024-07-25 11:47:28.336371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:29.338 [2024-07-25 11:47:28.336385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:29.338 [2024-07-25 11:47:28.336400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.338 [2024-07-25 11:47:28.336455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:29.338 [2024-07-25 11:47:28.336478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:29.338 [2024-07-25 11:47:28.336492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:29.338 [2024-07-25 11:47:28.336507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.338 [2024-07-25 11:47:28.336567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:29.338 [2024-07-25 11:47:28.336587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:29.338 [2024-07-25 11:47:28.336600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:29.338 [2024-07-25 11:47:28.336615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.338 [2024-07-25 11:47:28.336685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:29.338 [2024-07-25 11:47:28.336711] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:29.338 [2024-07-25 11:47:28.336725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:29.338 [2024-07-25 11:47:28.336740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.338 [2024-07-25 11:47:28.336954] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 460.588 ms, result 0 00:22:29.338 true 00:22:29.338 11:47:28 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 80598 00:22:29.338 11:47:28 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 80598 ']' 00:22:29.338 11:47:28 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 80598 00:22:29.338 11:47:28 ftl.ftl_restore -- common/autotest_common.sh@955 -- # uname 00:22:29.338 11:47:28 ftl.ftl_restore -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:29.338 11:47:28 ftl.ftl_restore -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80598 00:22:29.338 11:47:28 ftl.ftl_restore -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:29.338 11:47:28 ftl.ftl_restore -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:29.338 killing process with pid 80598 00:22:29.338 11:47:28 ftl.ftl_restore -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80598' 00:22:29.338 11:47:28 ftl.ftl_restore -- common/autotest_common.sh@969 -- # kill 80598 00:22:29.338 11:47:28 ftl.ftl_restore -- common/autotest_common.sh@974 -- # wait 80598 00:22:32.636 11:47:31 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:22:37.896 262144+0 records in 00:22:37.896 262144+0 records out 00:22:37.896 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.76161 s, 225 MB/s 00:22:37.896 11:47:35 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:39.268 11:47:38 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:39.268 [2024-07-25 11:47:38.244964] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
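The dd summary above is self-consistent: 256 Ki records of 4 KiB are exactly 1 GiB, and dd reports decimal megabytes per second. A quick check in shell:

echo $(( 262144 * 4096 ))   # 1073741824 bytes = 1.0 GiB
# 1073741824 B / 4.76161 s ≈ 225.5 MB/s (decimal), which dd rounds to "225 MB/s"

spdk_dd then replays that file into the FTL bdev (--ob=ftl0) using the JSON configuration saved before the unload; the restore test presumably compares md5sums once the device has been brought back up.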
00:22:39.268 [2024-07-25 11:47:38.245136] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80856 ] 00:22:39.525 [2024-07-25 11:47:38.416621] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:39.783 [2024-07-25 11:47:38.701605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:40.040 [2024-07-25 11:47:39.070617] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:40.040 [2024-07-25 11:47:39.070747] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:40.298 [2024-07-25 11:47:39.239393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.298 [2024-07-25 11:47:39.239483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:40.298 [2024-07-25 11:47:39.239504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:40.298 [2024-07-25 11:47:39.239517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.298 [2024-07-25 11:47:39.239586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.298 [2024-07-25 11:47:39.239605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:40.298 [2024-07-25 11:47:39.239618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:22:40.298 [2024-07-25 11:47:39.239633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.298 [2024-07-25 11:47:39.239670] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:40.298 [2024-07-25 11:47:39.240611] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:40.298 [2024-07-25 11:47:39.240642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.298 [2024-07-25 11:47:39.240655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:40.298 [2024-07-25 11:47:39.240668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.984 ms 00:22:40.298 [2024-07-25 11:47:39.240681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.298 [2024-07-25 11:47:39.242859] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:40.298 [2024-07-25 11:47:39.260977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.298 [2024-07-25 11:47:39.261042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:40.298 [2024-07-25 11:47:39.261065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.117 ms 00:22:40.298 [2024-07-25 11:47:39.261077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.298 [2024-07-25 11:47:39.261168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.298 [2024-07-25 11:47:39.261192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:40.298 [2024-07-25 11:47:39.261207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:22:40.298 [2024-07-25 11:47:39.261219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.298 [2024-07-25 11:47:39.270053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:40.299 [2024-07-25 11:47:39.270095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:40.299 [2024-07-25 11:47:39.270111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.723 ms 00:22:40.299 [2024-07-25 11:47:39.270122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.299 [2024-07-25 11:47:39.270237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.299 [2024-07-25 11:47:39.270256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:40.299 [2024-07-25 11:47:39.270270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:22:40.299 [2024-07-25 11:47:39.270281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.299 [2024-07-25 11:47:39.270356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.299 [2024-07-25 11:47:39.270375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:40.299 [2024-07-25 11:47:39.270388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:22:40.299 [2024-07-25 11:47:39.270399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.299 [2024-07-25 11:47:39.270447] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:40.299 [2024-07-25 11:47:39.275515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.299 [2024-07-25 11:47:39.275570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:40.299 [2024-07-25 11:47:39.275585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.080 ms 00:22:40.299 [2024-07-25 11:47:39.275596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.299 [2024-07-25 11:47:39.275644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.299 [2024-07-25 11:47:39.275660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:40.299 [2024-07-25 11:47:39.275673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:40.299 [2024-07-25 11:47:39.275684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.299 [2024-07-25 11:47:39.275756] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:40.299 [2024-07-25 11:47:39.275794] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:40.299 [2024-07-25 11:47:39.275842] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:40.299 [2024-07-25 11:47:39.275867] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:22:40.299 [2024-07-25 11:47:39.275990] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:40.299 [2024-07-25 11:47:39.276010] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:40.299 [2024-07-25 11:47:39.276026] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:22:40.299 [2024-07-25 11:47:39.276042] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:40.299 [2024-07-25 11:47:39.276056] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:40.299 [2024-07-25 11:47:39.276069] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:40.299 [2024-07-25 11:47:39.276081] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:40.299 [2024-07-25 11:47:39.276092] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:40.299 [2024-07-25 11:47:39.276103] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:40.299 [2024-07-25 11:47:39.276115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.299 [2024-07-25 11:47:39.276132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:40.299 [2024-07-25 11:47:39.276144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.363 ms 00:22:40.299 [2024-07-25 11:47:39.276155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.299 [2024-07-25 11:47:39.276255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.299 [2024-07-25 11:47:39.276283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:40.299 [2024-07-25 11:47:39.276297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:22:40.299 [2024-07-25 11:47:39.276308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.299 [2024-07-25 11:47:39.276416] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:40.299 [2024-07-25 11:47:39.276433] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:40.299 [2024-07-25 11:47:39.276451] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:40.299 [2024-07-25 11:47:39.276463] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:40.299 [2024-07-25 11:47:39.276475] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:40.299 [2024-07-25 11:47:39.276486] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:40.299 [2024-07-25 11:47:39.276497] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:40.299 [2024-07-25 11:47:39.276509] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:40.299 [2024-07-25 11:47:39.276520] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:40.299 [2024-07-25 11:47:39.276538] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:40.299 [2024-07-25 11:47:39.276558] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:40.299 [2024-07-25 11:47:39.276572] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:40.299 [2024-07-25 11:47:39.276584] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:40.299 [2024-07-25 11:47:39.276594] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:40.299 [2024-07-25 11:47:39.276605] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:40.299 [2024-07-25 11:47:39.276616] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:40.299 [2024-07-25 11:47:39.276626] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:40.299 [2024-07-25 11:47:39.276646] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:40.299 [2024-07-25 11:47:39.276657] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:40.299 [2024-07-25 11:47:39.276669] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:40.299 [2024-07-25 11:47:39.276694] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:40.299 [2024-07-25 11:47:39.276710] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:40.299 [2024-07-25 11:47:39.276731] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:40.299 [2024-07-25 11:47:39.276742] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:40.299 [2024-07-25 11:47:39.276753] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:40.299 [2024-07-25 11:47:39.276763] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:40.299 [2024-07-25 11:47:39.276774] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:40.299 [2024-07-25 11:47:39.276784] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:40.299 [2024-07-25 11:47:39.276795] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:40.299 [2024-07-25 11:47:39.276806] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:40.299 [2024-07-25 11:47:39.276816] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:40.299 [2024-07-25 11:47:39.276827] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:40.299 [2024-07-25 11:47:39.276838] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:40.299 [2024-07-25 11:47:39.276849] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:40.299 [2024-07-25 11:47:39.276859] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:40.299 [2024-07-25 11:47:39.276870] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:40.299 [2024-07-25 11:47:39.276881] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:40.299 [2024-07-25 11:47:39.276892] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:40.299 [2024-07-25 11:47:39.276903] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:40.299 [2024-07-25 11:47:39.276913] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:40.299 [2024-07-25 11:47:39.276941] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:40.299 [2024-07-25 11:47:39.276953] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:40.299 [2024-07-25 11:47:39.276965] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:40.299 [2024-07-25 11:47:39.276976] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:40.299 [2024-07-25 11:47:39.276988] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:40.299 [2024-07-25 11:47:39.277000] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:40.299 [2024-07-25 11:47:39.277011] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:40.299 [2024-07-25 11:47:39.277023] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:40.299 [2024-07-25 11:47:39.277034] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:40.299 [2024-07-25 11:47:39.277048] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:40.299 
[2024-07-25 11:47:39.277060] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:40.299 [2024-07-25 11:47:39.277071] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:40.299 [2024-07-25 11:47:39.277082] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:40.299 [2024-07-25 11:47:39.277095] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:40.299 [2024-07-25 11:47:39.277111] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:40.299 [2024-07-25 11:47:39.277124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:40.299 [2024-07-25 11:47:39.277136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:40.299 [2024-07-25 11:47:39.277148] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:40.299 [2024-07-25 11:47:39.277159] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:40.299 [2024-07-25 11:47:39.277171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:40.299 [2024-07-25 11:47:39.277182] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:40.300 [2024-07-25 11:47:39.277194] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:40.300 [2024-07-25 11:47:39.277205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:40.300 [2024-07-25 11:47:39.277216] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:40.300 [2024-07-25 11:47:39.277228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:40.300 [2024-07-25 11:47:39.277239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:40.300 [2024-07-25 11:47:39.277251] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:40.300 [2024-07-25 11:47:39.277262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:40.300 [2024-07-25 11:47:39.277275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:40.300 [2024-07-25 11:47:39.277286] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:40.300 [2024-07-25 11:47:39.277299] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:40.300 [2024-07-25 11:47:39.277319] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:22:40.300 [2024-07-25 11:47:39.277331] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:40.300 [2024-07-25 11:47:39.277343] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:40.300 [2024-07-25 11:47:39.277356] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:40.300 [2024-07-25 11:47:39.277368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.300 [2024-07-25 11:47:39.277380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:40.300 [2024-07-25 11:47:39.277393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.017 ms 00:22:40.300 [2024-07-25 11:47:39.277404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.300 [2024-07-25 11:47:39.325621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.300 [2024-07-25 11:47:39.325727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:40.300 [2024-07-25 11:47:39.325751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.144 ms 00:22:40.300 [2024-07-25 11:47:39.325764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.300 [2024-07-25 11:47:39.325910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.300 [2024-07-25 11:47:39.325942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:40.300 [2024-07-25 11:47:39.325958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:22:40.300 [2024-07-25 11:47:39.325970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.558 [2024-07-25 11:47:39.371804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.558 [2024-07-25 11:47:39.371887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:40.558 [2024-07-25 11:47:39.371907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.715 ms 00:22:40.558 [2024-07-25 11:47:39.371928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.558 [2024-07-25 11:47:39.372061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.558 [2024-07-25 11:47:39.372078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:40.558 [2024-07-25 11:47:39.372092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:40.558 [2024-07-25 11:47:39.372110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.558 [2024-07-25 11:47:39.372800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.558 [2024-07-25 11:47:39.372826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:40.558 [2024-07-25 11:47:39.372840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.567 ms 00:22:40.558 [2024-07-25 11:47:39.372852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.558 [2024-07-25 11:47:39.373039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.558 [2024-07-25 11:47:39.373061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:40.558 [2024-07-25 11:47:39.373074] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.155 ms 00:22:40.559 [2024-07-25 11:47:39.373086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.559 [2024-07-25 11:47:39.393727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.559 [2024-07-25 11:47:39.393767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:40.559 [2024-07-25 11:47:39.393784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.606 ms 00:22:40.559 [2024-07-25 11:47:39.393800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.559 [2024-07-25 11:47:39.412463] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:22:40.559 [2024-07-25 11:47:39.412520] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:40.559 [2024-07-25 11:47:39.412541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.559 [2024-07-25 11:47:39.412554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:40.559 [2024-07-25 11:47:39.412567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.538 ms 00:22:40.559 [2024-07-25 11:47:39.412578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.559 [2024-07-25 11:47:39.443835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.559 [2024-07-25 11:47:39.443945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:40.559 [2024-07-25 11:47:39.443982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.199 ms 00:22:40.559 [2024-07-25 11:47:39.443995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.559 [2024-07-25 11:47:39.462123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.559 [2024-07-25 11:47:39.462172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:40.559 [2024-07-25 11:47:39.462192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.993 ms 00:22:40.559 [2024-07-25 11:47:39.462204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.559 [2024-07-25 11:47:39.479295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.559 [2024-07-25 11:47:39.479339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:40.559 [2024-07-25 11:47:39.479359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.036 ms 00:22:40.559 [2024-07-25 11:47:39.479370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.559 [2024-07-25 11:47:39.480468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.559 [2024-07-25 11:47:39.480505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:40.559 [2024-07-25 11:47:39.480521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.859 ms 00:22:40.559 [2024-07-25 11:47:39.480532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.559 [2024-07-25 11:47:39.567473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.559 [2024-07-25 11:47:39.567549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:40.559 [2024-07-25 11:47:39.567570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 86.905 ms 00:22:40.559 [2024-07-25 11:47:39.567582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.559 [2024-07-25 11:47:39.582520] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:40.559 [2024-07-25 11:47:39.587346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.559 [2024-07-25 11:47:39.587397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:40.559 [2024-07-25 11:47:39.587430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.636 ms 00:22:40.559 [2024-07-25 11:47:39.587458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.559 [2024-07-25 11:47:39.587641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.559 [2024-07-25 11:47:39.587680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:40.559 [2024-07-25 11:47:39.587694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:40.559 [2024-07-25 11:47:39.587706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.559 [2024-07-25 11:47:39.587837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.559 [2024-07-25 11:47:39.587869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:40.559 [2024-07-25 11:47:39.587883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:22:40.559 [2024-07-25 11:47:39.587895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.559 [2024-07-25 11:47:39.587944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.559 [2024-07-25 11:47:39.587963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:40.559 [2024-07-25 11:47:39.587977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:40.559 [2024-07-25 11:47:39.587988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.559 [2024-07-25 11:47:39.588032] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:40.559 [2024-07-25 11:47:39.588050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.559 [2024-07-25 11:47:39.588061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:40.559 [2024-07-25 11:47:39.588079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:22:40.559 [2024-07-25 11:47:39.588091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.816 [2024-07-25 11:47:39.624450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.816 [2024-07-25 11:47:39.624510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:40.816 [2024-07-25 11:47:39.624531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.333 ms 00:22:40.816 [2024-07-25 11:47:39.624543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.816 [2024-07-25 11:47:39.624672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.816 [2024-07-25 11:47:39.624697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:40.816 [2024-07-25 11:47:39.624711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:22:40.816 [2024-07-25 11:47:39.624723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
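Just before the 'FTL startup' finish message below, the region map dumped above can be cross-checked with a throwaway calculation (not part of the test output; the 4 KiB block size is an assumption, SPDK FTL's default):

    l2p_entries=20971520    # "L2P entries: 20971520" from the dump
    l2p_addr_size=4         # "L2P address size: 4" (bytes per entry)
    ftl_block=4096          # assumed FTL block size

    echo "l2p region:    $(( l2p_entries * l2p_addr_size / 1024 / 1024 )) MiB"
    # -> 80 MiB, matching "Region l2p ... blocks: 80.00 MiB"
    echo "logical space: $(( l2p_entries * ftl_block / 1024 / 1024 )) MiB"
    # -> 81920 MiB addressable, against the 102400.00 MiB data_btm region,
    #    so roughly 20% of the base device is held back as over-provisioning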
00:22:40.816 [2024-07-25 11:47:39.626190] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 386.188 ms, result 0 00:23:18.713  Copying: 26/1024 [MB] (26 MBps) Copying: 52/1024 [MB] (26 MBps) Copying: 79/1024 [MB] (26 MBps) Copying: 105/1024 [MB] (26 MBps) Copying: 131/1024 [MB] (25 MBps) Copying: 162/1024 [MB] (31 MBps) Copying: 195/1024 [MB] (32 MBps) Copying: 227/1024 [MB] (32 MBps) Copying: 260/1024 [MB] (32 MBps) Copying: 292/1024 [MB] (32 MBps) Copying: 321/1024 [MB] (28 MBps) Copying: 347/1024 [MB] (26 MBps) Copying: 375/1024 [MB] (27 MBps) Copying: 402/1024 [MB] (27 MBps) Copying: 430/1024 [MB] (27 MBps) Copying: 455/1024 [MB] (25 MBps) Copying: 482/1024 [MB] (26 MBps) Copying: 507/1024 [MB] (25 MBps) Copying: 532/1024 [MB] (25 MBps) Copying: 558/1024 [MB] (25 MBps) Copying: 583/1024 [MB] (25 MBps) Copying: 608/1024 [MB] (24 MBps) Copying: 633/1024 [MB] (25 MBps) Copying: 659/1024 [MB] (25 MBps) Copying: 685/1024 [MB] (25 MBps) Copying: 711/1024 [MB] (26 MBps) Copying: 737/1024 [MB] (25 MBps) Copying: 763/1024 [MB] (26 MBps) Copying: 789/1024 [MB] (25 MBps) Copying: 815/1024 [MB] (26 MBps) Copying: 841/1024 [MB] (26 MBps) Copying: 867/1024 [MB] (25 MBps) Copying: 892/1024 [MB] (25 MBps) Copying: 918/1024 [MB] (25 MBps) Copying: 944/1024 [MB] (26 MBps) Copying: 970/1024 [MB] (25 MBps) Copying: 996/1024 [MB] (25 MBps) Copying: 1021/1024 [MB] (25 MBps) Copying: 1024/1024 [MB] (average 26 MBps)[2024-07-25 11:48:17.727495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.713 [2024-07-25 11:48:17.727578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:18.713 [2024-07-25 11:48:17.727602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:18.713 [2024-07-25 11:48:17.727615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.713 [2024-07-25 11:48:17.727645] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:18.713 [2024-07-25 11:48:17.731363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.713 [2024-07-25 11:48:17.731400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:18.713 [2024-07-25 11:48:17.731416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.685 ms 00:23:18.713 [2024-07-25 11:48:17.731428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.713 [2024-07-25 11:48:17.733175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.713 [2024-07-25 11:48:17.733217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:18.713 [2024-07-25 11:48:17.733234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.712 ms 00:23:18.713 [2024-07-25 11:48:17.733245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.713 [2024-07-25 11:48:17.748872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.713 [2024-07-25 11:48:17.748927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:18.713 [2024-07-25 11:48:17.748946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.605 ms 00:23:18.713 [2024-07-25 11:48:17.748958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.713 [2024-07-25 11:48:17.755565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.713 [2024-07-25 
11:48:17.755611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:18.713 [2024-07-25 11:48:17.755626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.561 ms 00:23:18.713 [2024-07-25 11:48:17.755637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.972 [2024-07-25 11:48:17.788037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.972 [2024-07-25 11:48:17.788103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:18.972 [2024-07-25 11:48:17.788139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.335 ms 00:23:18.972 [2024-07-25 11:48:17.788161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.972 [2024-07-25 11:48:17.806594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.972 [2024-07-25 11:48:17.806642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:18.972 [2024-07-25 11:48:17.806661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.386 ms 00:23:18.972 [2024-07-25 11:48:17.806673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.972 [2024-07-25 11:48:17.806819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.972 [2024-07-25 11:48:17.806839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:18.972 [2024-07-25 11:48:17.806853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:23:18.972 [2024-07-25 11:48:17.806870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.972 [2024-07-25 11:48:17.838589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.972 [2024-07-25 11:48:17.838663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:23:18.972 [2024-07-25 11:48:17.838682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.697 ms 00:23:18.972 [2024-07-25 11:48:17.838693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.972 [2024-07-25 11:48:17.870091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.972 [2024-07-25 11:48:17.870152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:23:18.972 [2024-07-25 11:48:17.870201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.348 ms 00:23:18.972 [2024-07-25 11:48:17.870212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.972 [2024-07-25 11:48:17.901425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.972 [2024-07-25 11:48:17.901484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:18.972 [2024-07-25 11:48:17.901517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.169 ms 00:23:18.972 [2024-07-25 11:48:17.901560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.972 [2024-07-25 11:48:17.933000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.972 [2024-07-25 11:48:17.933046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:18.972 [2024-07-25 11:48:17.933064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.314 ms 00:23:18.972 [2024-07-25 11:48:17.933075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.972 [2024-07-25 11:48:17.933120] ftl_debug.c: 
165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:18.972 [2024-07-25 11:48:17.933146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:18.972 [2024-07-25 11:48:17.933161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:18.972 [2024-07-25 11:48:17.933183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:18.972 [2024-07-25 11:48:17.933195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:18.972 [2024-07-25 11:48:17.933217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:18.972 [2024-07-25 11:48:17.933230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:18.972 [2024-07-25 11:48:17.933242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:18.972 [2024-07-25 11:48:17.933255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:18.972 [2024-07-25 11:48:17.933267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:18.972 [2024-07-25 11:48:17.933280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:18.972 [2024-07-25 11:48:17.933292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:18.972 [2024-07-25 11:48:17.933304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:18.972 [2024-07-25 11:48:17.933316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:18.972 [2024-07-25 11:48:17.933328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:18.972 [2024-07-25 11:48:17.933340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:18.972 [2024-07-25 11:48:17.933352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:18.972 [2024-07-25 11:48:17.933364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:18.972 [2024-07-25 11:48:17.933375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:18.972 [2024-07-25 11:48:17.933387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:18.972 [2024-07-25 11:48:17.933399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:18.972 [2024-07-25 11:48:17.933411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:18.972 [2024-07-25 11:48:17.933422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:18.972 [2024-07-25 11:48:17.933434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:18.972 [2024-07-25 11:48:17.933446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:18.972 [2024-07-25 11:48:17.933458] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:18.972 [2024-07-25 11:48:17.933470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:18.972 [2024-07-25 11:48:17.933482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:18.972 [2024-07-25 11:48:17.933494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:18.972 [2024-07-25 11:48:17.933506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:18.972 [2024-07-25 11:48:17.933518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:18.972 [2024-07-25 11:48:17.933530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:18.972 [2024-07-25 11:48:17.933543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:18.972 [2024-07-25 11:48:17.933557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:18.972 [2024-07-25 11:48:17.933570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:18.972 [2024-07-25 11:48:17.933583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:18.972 [2024-07-25 11:48:17.933594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:18.972 [2024-07-25 11:48:17.933607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:18.972 [2024-07-25 11:48:17.933619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:18.972 [2024-07-25 11:48:17.933632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:18.972 [2024-07-25 11:48:17.933643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:18.972 [2024-07-25 11:48:17.933655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:18.972 [2024-07-25 11:48:17.933667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:18.972 [2024-07-25 11:48:17.933679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:18.972 [2024-07-25 11:48:17.933691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:18.972 [2024-07-25 11:48:17.933702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:18.972 [2024-07-25 11:48:17.933714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:18.972 [2024-07-25 11:48:17.933725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:18.972 [2024-07-25 11:48:17.933748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:18.972 [2024-07-25 11:48:17.933760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:18.972 [2024-07-25 
11:48:17.933772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:18.972 [2024-07-25 11:48:17.933784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:18.973 [2024-07-25 11:48:17.933796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:18.973 [2024-07-25 11:48:17.933809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:18.973 [2024-07-25 11:48:17.933821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:18.973 [2024-07-25 11:48:17.933833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:18.973 [2024-07-25 11:48:17.933844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:18.973 [2024-07-25 11:48:17.933857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:18.973 [2024-07-25 11:48:17.933869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:18.973 [2024-07-25 11:48:17.933880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:18.973 [2024-07-25 11:48:17.933892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:18.973 [2024-07-25 11:48:17.933904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:18.973 [2024-07-25 11:48:17.933916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:18.973 [2024-07-25 11:48:17.933944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:18.973 [2024-07-25 11:48:17.933956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:18.973 [2024-07-25 11:48:17.933970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:18.973 [2024-07-25 11:48:17.933983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:18.973 [2024-07-25 11:48:17.933996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:18.973 [2024-07-25 11:48:17.934008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:18.973 [2024-07-25 11:48:17.934020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:18.973 [2024-07-25 11:48:17.934032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:18.973 [2024-07-25 11:48:17.934045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:18.973 [2024-07-25 11:48:17.934057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:18.973 [2024-07-25 11:48:17.934069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:18.973 [2024-07-25 11:48:17.934081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 
00:23:18.973 [2024-07-25 11:48:17.934094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:18.973 [2024-07-25 11:48:17.934105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:18.973 [2024-07-25 11:48:17.934117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:18.973 [2024-07-25 11:48:17.934129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:18.973 [2024-07-25 11:48:17.934141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:18.973 [2024-07-25 11:48:17.934153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:18.973 [2024-07-25 11:48:17.934166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:18.973 [2024-07-25 11:48:17.934177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:18.973 [2024-07-25 11:48:17.934189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:18.973 [2024-07-25 11:48:17.934201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:18.973 [2024-07-25 11:48:17.934214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:18.973 [2024-07-25 11:48:17.934226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:18.973 [2024-07-25 11:48:17.934239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:18.973 [2024-07-25 11:48:17.934251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:18.973 [2024-07-25 11:48:17.934263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:18.973 [2024-07-25 11:48:17.934274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:18.973 [2024-07-25 11:48:17.934287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:18.973 [2024-07-25 11:48:17.934299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:18.973 [2024-07-25 11:48:17.934311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:18.973 [2024-07-25 11:48:17.934323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:18.973 [2024-07-25 11:48:17.934335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:18.973 [2024-07-25 11:48:17.934347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:18.973 [2024-07-25 11:48:17.934360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:18.973 [2024-07-25 11:48:17.934381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:18.973 [2024-07-25 11:48:17.934394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 
wr_cnt: 0 state: free 00:23:18.973 [2024-07-25 11:48:17.934406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:18.973 [2024-07-25 11:48:17.934428] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:18.973 [2024-07-25 11:48:17.934439] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c1430877-b154-4b5e-893a-2e52e6ce0696 00:23:18.973 [2024-07-25 11:48:17.934457] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:18.973 [2024-07-25 11:48:17.934476] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:18.973 [2024-07-25 11:48:17.934487] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:18.973 [2024-07-25 11:48:17.934498] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:18.973 [2024-07-25 11:48:17.934510] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:18.973 [2024-07-25 11:48:17.934521] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:18.973 [2024-07-25 11:48:17.934533] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:18.973 [2024-07-25 11:48:17.934543] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:18.973 [2024-07-25 11:48:17.934553] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:18.973 [2024-07-25 11:48:17.934564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.973 [2024-07-25 11:48:17.934576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:18.973 [2024-07-25 11:48:17.934588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.446 ms 00:23:18.973 [2024-07-25 11:48:17.934604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.973 [2024-07-25 11:48:17.952002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.973 [2024-07-25 11:48:17.952046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:18.973 [2024-07-25 11:48:17.952063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.357 ms 00:23:18.973 [2024-07-25 11:48:17.952090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.973 [2024-07-25 11:48:17.952569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.973 [2024-07-25 11:48:17.952601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:18.973 [2024-07-25 11:48:17.952617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.451 ms 00:23:18.973 [2024-07-25 11:48:17.952629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.973 [2024-07-25 11:48:17.991992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:18.973 [2024-07-25 11:48:17.992063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:18.973 [2024-07-25 11:48:17.992083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:18.973 [2024-07-25 11:48:17.992096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.973 [2024-07-25 11:48:17.992187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:18.973 [2024-07-25 11:48:17.992204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:18.973 [2024-07-25 11:48:17.992217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:23:18.973 [2024-07-25 11:48:17.992229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.973 [2024-07-25 11:48:17.992350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:18.973 [2024-07-25 11:48:17.992371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:18.973 [2024-07-25 11:48:17.992384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:18.973 [2024-07-25 11:48:17.992395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.973 [2024-07-25 11:48:17.992418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:18.973 [2024-07-25 11:48:17.992433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:18.973 [2024-07-25 11:48:17.992445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:18.973 [2024-07-25 11:48:17.992457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.232 [2024-07-25 11:48:18.100138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:19.232 [2024-07-25 11:48:18.100217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:19.232 [2024-07-25 11:48:18.100238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:19.232 [2024-07-25 11:48:18.100250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.232 [2024-07-25 11:48:18.188942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:19.232 [2024-07-25 11:48:18.189063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:19.232 [2024-07-25 11:48:18.189099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:19.232 [2024-07-25 11:48:18.189112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.232 [2024-07-25 11:48:18.189243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:19.232 [2024-07-25 11:48:18.189267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:19.232 [2024-07-25 11:48:18.189280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:19.232 [2024-07-25 11:48:18.189292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.232 [2024-07-25 11:48:18.189348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:19.232 [2024-07-25 11:48:18.189364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:19.232 [2024-07-25 11:48:18.189376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:19.232 [2024-07-25 11:48:18.189387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.232 [2024-07-25 11:48:18.189513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:19.232 [2024-07-25 11:48:18.189533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:19.232 [2024-07-25 11:48:18.189552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:19.232 [2024-07-25 11:48:18.189563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.232 [2024-07-25 11:48:18.189612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:19.232 [2024-07-25 11:48:18.189630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:19.232 
[2024-07-25 11:48:18.189643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:19.232 [2024-07-25 11:48:18.189655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.232 [2024-07-25 11:48:18.189716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:19.232 [2024-07-25 11:48:18.189731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:19.232 [2024-07-25 11:48:18.189750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:19.232 [2024-07-25 11:48:18.189762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.232 [2024-07-25 11:48:18.189819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:19.232 [2024-07-25 11:48:18.189842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:19.232 [2024-07-25 11:48:18.189854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:19.232 [2024-07-25 11:48:18.189865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.232 [2024-07-25 11:48:18.190041] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 462.509 ms, result 0 00:23:20.608 00:23:20.608 00:23:20.608 11:48:19 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:23:20.608 [2024-07-25 11:48:19.488033] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:23:20.608 [2024-07-25 11:48:19.488230] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81269 ] 00:23:20.608 [2024-07-25 11:48:19.653113] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:20.866 [2024-07-25 11:48:19.881266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:21.433 [2024-07-25 11:48:20.219005] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:21.433 [2024-07-25 11:48:20.219156] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:21.433 [2024-07-25 11:48:20.382800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.433 [2024-07-25 11:48:20.382883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:21.433 [2024-07-25 11:48:20.382920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:21.433 [2024-07-25 11:48:20.382978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.433 [2024-07-25 11:48:20.383052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.433 [2024-07-25 11:48:20.383073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:21.433 [2024-07-25 11:48:20.383087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:23:21.433 [2024-07-25 11:48:20.383103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.433 [2024-07-25 11:48:20.383149] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:21.433 [2024-07-25 11:48:20.384078] 
mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:21.433 [2024-07-25 11:48:20.384125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.433 [2024-07-25 11:48:20.384141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:21.433 [2024-07-25 11:48:20.384154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.997 ms 00:23:21.433 [2024-07-25 11:48:20.384167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.433 [2024-07-25 11:48:20.386384] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:21.433 [2024-07-25 11:48:20.402300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.433 [2024-07-25 11:48:20.402365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:21.433 [2024-07-25 11:48:20.402400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.918 ms 00:23:21.433 [2024-07-25 11:48:20.402412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.433 [2024-07-25 11:48:20.402487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.433 [2024-07-25 11:48:20.402511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:21.433 [2024-07-25 11:48:20.402524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:23:21.433 [2024-07-25 11:48:20.402535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.433 [2024-07-25 11:48:20.411798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.433 [2024-07-25 11:48:20.411859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:21.433 [2024-07-25 11:48:20.411891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.138 ms 00:23:21.433 [2024-07-25 11:48:20.411902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.433 [2024-07-25 11:48:20.412020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.433 [2024-07-25 11:48:20.412055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:21.433 [2024-07-25 11:48:20.412068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:23:21.433 [2024-07-25 11:48:20.412079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.433 [2024-07-25 11:48:20.412175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.433 [2024-07-25 11:48:20.412194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:21.433 [2024-07-25 11:48:20.412207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:23:21.433 [2024-07-25 11:48:20.412218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.433 [2024-07-25 11:48:20.412256] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:21.433 [2024-07-25 11:48:20.417112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.433 [2024-07-25 11:48:20.417153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:21.433 [2024-07-25 11:48:20.417184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.867 ms 00:23:21.433 [2024-07-25 11:48:20.417196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.433 
[2024-07-25 11:48:20.417248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.433 [2024-07-25 11:48:20.417267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:21.433 [2024-07-25 11:48:20.417279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:21.433 [2024-07-25 11:48:20.417290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.433 [2024-07-25 11:48:20.417398] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:21.433 [2024-07-25 11:48:20.417436] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:21.433 [2024-07-25 11:48:20.417484] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:21.433 [2024-07-25 11:48:20.417512] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:23:21.433 [2024-07-25 11:48:20.417624] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:21.433 [2024-07-25 11:48:20.417642] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:21.433 [2024-07-25 11:48:20.417658] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:23:21.433 [2024-07-25 11:48:20.417675] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:21.433 [2024-07-25 11:48:20.417690] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:21.434 [2024-07-25 11:48:20.417703] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:21.434 [2024-07-25 11:48:20.417715] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:21.434 [2024-07-25 11:48:20.417726] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:21.434 [2024-07-25 11:48:20.417738] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:21.434 [2024-07-25 11:48:20.417750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.434 [2024-07-25 11:48:20.417768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:21.434 [2024-07-25 11:48:20.417781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.357 ms 00:23:21.434 [2024-07-25 11:48:20.417793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.434 [2024-07-25 11:48:20.417888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.434 [2024-07-25 11:48:20.417905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:21.434 [2024-07-25 11:48:20.417917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:23:21.434 [2024-07-25 11:48:20.417928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.434 [2024-07-25 11:48:20.418059] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:21.434 [2024-07-25 11:48:20.418081] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:21.434 [2024-07-25 11:48:20.418101] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:21.434 [2024-07-25 11:48:20.418114] ftl_layout.c: 121:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:21.434 [2024-07-25 11:48:20.418126] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:21.434 [2024-07-25 11:48:20.418137] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:21.434 [2024-07-25 11:48:20.418147] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:21.434 [2024-07-25 11:48:20.418158] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:21.434 [2024-07-25 11:48:20.418169] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:21.434 [2024-07-25 11:48:20.418180] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:21.434 [2024-07-25 11:48:20.418191] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:21.434 [2024-07-25 11:48:20.418202] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:21.434 [2024-07-25 11:48:20.418212] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:21.434 [2024-07-25 11:48:20.418223] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:21.434 [2024-07-25 11:48:20.418233] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:21.434 [2024-07-25 11:48:20.418244] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:21.434 [2024-07-25 11:48:20.418257] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:21.434 [2024-07-25 11:48:20.418268] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:21.434 [2024-07-25 11:48:20.418279] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:21.434 [2024-07-25 11:48:20.418291] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:21.434 [2024-07-25 11:48:20.418316] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:21.434 [2024-07-25 11:48:20.418328] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:21.434 [2024-07-25 11:48:20.418339] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:21.434 [2024-07-25 11:48:20.418350] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:21.434 [2024-07-25 11:48:20.418360] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:21.434 [2024-07-25 11:48:20.418371] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:21.434 [2024-07-25 11:48:20.418382] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:21.434 [2024-07-25 11:48:20.418393] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:21.434 [2024-07-25 11:48:20.418403] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:21.434 [2024-07-25 11:48:20.418414] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:21.434 [2024-07-25 11:48:20.418424] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:21.434 [2024-07-25 11:48:20.418435] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:21.434 [2024-07-25 11:48:20.418446] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:21.434 [2024-07-25 11:48:20.418456] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:21.434 [2024-07-25 11:48:20.418467] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:21.434 [2024-07-25 11:48:20.418477] 
ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:21.434 [2024-07-25 11:48:20.418488] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:21.434 [2024-07-25 11:48:20.418499] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:21.434 [2024-07-25 11:48:20.418509] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:21.434 [2024-07-25 11:48:20.418520] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:21.434 [2024-07-25 11:48:20.418530] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:21.434 [2024-07-25 11:48:20.418541] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:21.434 [2024-07-25 11:48:20.418552] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:21.434 [2024-07-25 11:48:20.418563] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:21.434 [2024-07-25 11:48:20.418575] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:21.434 [2024-07-25 11:48:20.418589] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:21.434 [2024-07-25 11:48:20.418600] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:21.434 [2024-07-25 11:48:20.418612] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:21.434 [2024-07-25 11:48:20.418626] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:21.434 [2024-07-25 11:48:20.418637] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:21.434 [2024-07-25 11:48:20.418649] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:21.434 [2024-07-25 11:48:20.418660] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:21.434 [2024-07-25 11:48:20.418672] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:21.434 [2024-07-25 11:48:20.418685] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:21.434 [2024-07-25 11:48:20.418700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:21.434 [2024-07-25 11:48:20.418714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:21.434 [2024-07-25 11:48:20.418726] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:21.434 [2024-07-25 11:48:20.418738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:21.434 [2024-07-25 11:48:20.418750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:21.434 [2024-07-25 11:48:20.418762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:21.434 [2024-07-25 11:48:20.418774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:21.434 [2024-07-25 11:48:20.418785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:21.434 [2024-07-25 
11:48:20.418797] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:21.434 [2024-07-25 11:48:20.418809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:21.434 [2024-07-25 11:48:20.418820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:21.434 [2024-07-25 11:48:20.418832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:21.434 [2024-07-25 11:48:20.418844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:21.434 [2024-07-25 11:48:20.418856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:21.434 [2024-07-25 11:48:20.418868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:21.434 [2024-07-25 11:48:20.418880] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:21.434 [2024-07-25 11:48:20.418893] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:21.434 [2024-07-25 11:48:20.418912] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:21.434 [2024-07-25 11:48:20.418951] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:21.434 [2024-07-25 11:48:20.418965] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:21.434 [2024-07-25 11:48:20.418977] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:21.434 [2024-07-25 11:48:20.418990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.434 [2024-07-25 11:48:20.419003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:21.434 [2024-07-25 11:48:20.419016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.997 ms 00:23:21.434 [2024-07-25 11:48:20.419027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.434 [2024-07-25 11:48:20.468206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.434 [2024-07-25 11:48:20.468326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:21.434 [2024-07-25 11:48:20.468350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.107 ms 00:23:21.434 [2024-07-25 11:48:20.468364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.434 [2024-07-25 11:48:20.468506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.434 [2024-07-25 11:48:20.468525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:21.434 [2024-07-25 11:48:20.468539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:23:21.435 [2024-07-25 11:48:20.468551] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.694 [2024-07-25 11:48:20.512500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.694 [2024-07-25 11:48:20.512565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:21.694 [2024-07-25 11:48:20.512604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.840 ms 00:23:21.694 [2024-07-25 11:48:20.512640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.694 [2024-07-25 11:48:20.512724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.694 [2024-07-25 11:48:20.512742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:21.694 [2024-07-25 11:48:20.512756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:21.694 [2024-07-25 11:48:20.512774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.694 [2024-07-25 11:48:20.513485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.694 [2024-07-25 11:48:20.513516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:21.694 [2024-07-25 11:48:20.513562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.574 ms 00:23:21.694 [2024-07-25 11:48:20.513574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.694 [2024-07-25 11:48:20.513770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.694 [2024-07-25 11:48:20.513791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:21.694 [2024-07-25 11:48:20.513804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.163 ms 00:23:21.694 [2024-07-25 11:48:20.513815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.694 [2024-07-25 11:48:20.532842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.694 [2024-07-25 11:48:20.532921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:21.694 [2024-07-25 11:48:20.532957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.991 ms 00:23:21.694 [2024-07-25 11:48:20.532978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.694 [2024-07-25 11:48:20.550530] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:21.694 [2024-07-25 11:48:20.550592] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:21.694 [2024-07-25 11:48:20.550627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.694 [2024-07-25 11:48:20.550640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:21.694 [2024-07-25 11:48:20.550655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.483 ms 00:23:21.694 [2024-07-25 11:48:20.550666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.694 [2024-07-25 11:48:20.579688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.694 [2024-07-25 11:48:20.579754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:21.694 [2024-07-25 11:48:20.579788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.972 ms 00:23:21.694 [2024-07-25 11:48:20.579800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.694 [2024-07-25 
11:48:20.594315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.694 [2024-07-25 11:48:20.594354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:21.694 [2024-07-25 11:48:20.594386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.464 ms 00:23:21.694 [2024-07-25 11:48:20.594398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.694 [2024-07-25 11:48:20.609101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.694 [2024-07-25 11:48:20.609159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:21.694 [2024-07-25 11:48:20.609192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.658 ms 00:23:21.694 [2024-07-25 11:48:20.609203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.694 [2024-07-25 11:48:20.610144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.694 [2024-07-25 11:48:20.610179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:21.694 [2024-07-25 11:48:20.610212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.806 ms 00:23:21.694 [2024-07-25 11:48:20.610224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.694 [2024-07-25 11:48:20.685001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.694 [2024-07-25 11:48:20.685101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:21.694 [2024-07-25 11:48:20.685140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.727 ms 00:23:21.694 [2024-07-25 11:48:20.685161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.694 [2024-07-25 11:48:20.698138] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:21.694 [2024-07-25 11:48:20.702505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.694 [2024-07-25 11:48:20.702569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:21.694 [2024-07-25 11:48:20.702591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.241 ms 00:23:21.694 [2024-07-25 11:48:20.702604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.694 [2024-07-25 11:48:20.702757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.694 [2024-07-25 11:48:20.702780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:21.694 [2024-07-25 11:48:20.702796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:21.694 [2024-07-25 11:48:20.702808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.694 [2024-07-25 11:48:20.702947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.694 [2024-07-25 11:48:20.702975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:21.694 [2024-07-25 11:48:20.702990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:23:21.694 [2024-07-25 11:48:20.703002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.694 [2024-07-25 11:48:20.703040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.694 [2024-07-25 11:48:20.703057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:21.694 [2024-07-25 11:48:20.703071] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:21.694 [2024-07-25 11:48:20.703082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.694 [2024-07-25 11:48:20.703130] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:21.694 [2024-07-25 11:48:20.703150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.694 [2024-07-25 11:48:20.703168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:21.694 [2024-07-25 11:48:20.703180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:23:21.694 [2024-07-25 11:48:20.703192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.694 [2024-07-25 11:48:20.735583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.694 [2024-07-25 11:48:20.735648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:21.694 [2024-07-25 11:48:20.735668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.364 ms 00:23:21.694 [2024-07-25 11:48:20.735688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.694 [2024-07-25 11:48:20.735798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.694 [2024-07-25 11:48:20.735820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:21.694 [2024-07-25 11:48:20.735834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:23:21.694 [2024-07-25 11:48:20.735846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.694 [2024-07-25 11:48:20.737355] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 353.939 ms, result 0 00:24:02.763  Copying: 25/1024 [MB] (25 MBps) Copying: 50/1024 [MB] (25 MBps) Copying: 76/1024 [MB] (25 MBps) Copying: 101/1024 [MB] (25 MBps) Copying: 126/1024 [MB] (24 MBps) Copying: 151/1024 [MB] (24 MBps) Copying: 176/1024 [MB] (25 MBps) Copying: 202/1024 [MB] (26 MBps) Copying: 228/1024 [MB] (26 MBps) Copying: 254/1024 [MB] (25 MBps) Copying: 280/1024 [MB] (25 MBps) Copying: 306/1024 [MB] (26 MBps) Copying: 333/1024 [MB] (26 MBps) Copying: 358/1024 [MB] (25 MBps) Copying: 383/1024 [MB] (24 MBps) Copying: 408/1024 [MB] (24 MBps) Copying: 433/1024 [MB] (25 MBps) Copying: 459/1024 [MB] (25 MBps) Copying: 485/1024 [MB] (25 MBps) Copying: 510/1024 [MB] (25 MBps) Copying: 537/1024 [MB] (26 MBps) Copying: 562/1024 [MB] (25 MBps) Copying: 587/1024 [MB] (25 MBps) Copying: 612/1024 [MB] (25 MBps) Copying: 637/1024 [MB] (25 MBps) Copying: 662/1024 [MB] (24 MBps) Copying: 688/1024 [MB] (25 MBps) Copying: 714/1024 [MB] (25 MBps) Copying: 739/1024 [MB] (25 MBps) Copying: 765/1024 [MB] (25 MBps) Copying: 790/1024 [MB] (24 MBps) Copying: 814/1024 [MB] (24 MBps) Copying: 839/1024 [MB] (25 MBps) Copying: 863/1024 [MB] (23 MBps) Copying: 888/1024 [MB] (24 MBps) Copying: 913/1024 [MB] (25 MBps) Copying: 939/1024 [MB] (25 MBps) Copying: 963/1024 [MB] (23 MBps) Copying: 986/1024 [MB] (23 MBps) Copying: 1010/1024 [MB] (23 MBps) Copying: 1024/1024 [MB] (average 25 MBps)[2024-07-25 11:49:01.673854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.763 [2024-07-25 11:49:01.674026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:02.763 [2024-07-25 11:49:01.674073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 
00:24:02.763 [2024-07-25 11:49:01.674093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.763 [2024-07-25 11:49:01.674144] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:02.763 [2024-07-25 11:49:01.679404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.763 [2024-07-25 11:49:01.679464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:02.763 [2024-07-25 11:49:01.679503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.222 ms 00:24:02.763 [2024-07-25 11:49:01.679537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.763 [2024-07-25 11:49:01.679981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.763 [2024-07-25 11:49:01.680035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:02.763 [2024-07-25 11:49:01.680062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.384 ms 00:24:02.763 [2024-07-25 11:49:01.680084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.763 [2024-07-25 11:49:01.685724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.763 [2024-07-25 11:49:01.685780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:02.763 [2024-07-25 11:49:01.685815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.602 ms 00:24:02.763 [2024-07-25 11:49:01.685836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.763 [2024-07-25 11:49:01.693701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.763 [2024-07-25 11:49:01.693744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:02.763 [2024-07-25 11:49:01.693762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.814 ms 00:24:02.763 [2024-07-25 11:49:01.693776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.763 [2024-07-25 11:49:01.725935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.763 [2024-07-25 11:49:01.725996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:02.763 [2024-07-25 11:49:01.726017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.056 ms 00:24:02.763 [2024-07-25 11:49:01.726031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.763 [2024-07-25 11:49:01.743892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.763 [2024-07-25 11:49:01.743953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:02.763 [2024-07-25 11:49:01.743991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.770 ms 00:24:02.763 [2024-07-25 11:49:01.744005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.763 [2024-07-25 11:49:01.744216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.763 [2024-07-25 11:49:01.744256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:02.763 [2024-07-25 11:49:01.744294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:24:02.763 [2024-07-25 11:49:01.744312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.763 [2024-07-25 11:49:01.775041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.763 [2024-07-25 
11:49:01.775104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:24:02.763 [2024-07-25 11:49:01.775141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.699 ms 00:24:02.763 [2024-07-25 11:49:01.775155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.763 [2024-07-25 11:49:01.806053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.763 [2024-07-25 11:49:01.806108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:24:02.763 [2024-07-25 11:49:01.806143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.769 ms 00:24:02.763 [2024-07-25 11:49:01.806155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.022 [2024-07-25 11:49:01.835601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.022 [2024-07-25 11:49:01.835653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:03.022 [2024-07-25 11:49:01.835707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.336 ms 00:24:03.022 [2024-07-25 11:49:01.835722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.022 [2024-07-25 11:49:01.865687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.022 [2024-07-25 11:49:01.865762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:03.022 [2024-07-25 11:49:01.865784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.854 ms 00:24:03.022 [2024-07-25 11:49:01.865798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.022 [2024-07-25 11:49:01.865857] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:03.022 [2024-07-25 11:49:01.865889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:03.022 [2024-07-25 11:49:01.865908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:03.022 [2024-07-25 11:49:01.865937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:03.022 [2024-07-25 11:49:01.865955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:03.022 [2024-07-25 11:49:01.865970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:03.022 [2024-07-25 11:49:01.865984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:03.022 [2024-07-25 11:49:01.865999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:03.022 [2024-07-25 11:49:01.866014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:03.022 [2024-07-25 11:49:01.866029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:03.022 [2024-07-25 11:49:01.866044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:03.022 [2024-07-25 11:49:01.866058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:03.022 [2024-07-25 11:49:01.866073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:03.022 [2024-07-25 11:49:01.866088] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:03.022 [2024-07-25 11:49:01.866102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:03.022 [2024-07-25 11:49:01.866117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.866132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.866146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.866161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.866175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.866189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.866204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.866218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.866233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.866247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.866262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.866276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.866291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.866306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.866320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.866335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.866349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.866363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.866378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.866392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.866408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.866424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.866438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 
[2024-07-25 11:49:01.866454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.866468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.866483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.866497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.866512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.866527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.866542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.866556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.866571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.866587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.866602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.866617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.866631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.866646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.866660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.866675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.866690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.866705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.866720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.866735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.866749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.866766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.866781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.866796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.866811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 
state: free 00:24:03.023 [2024-07-25 11:49:01.866826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.866844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.866859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.866874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.866890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.866905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.866930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.866947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.866963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.866978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.866993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.867008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.867023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.867037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.867051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.867066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.867084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.867100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.867117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.867132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.867149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.867165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.867181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.867197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.867212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 
0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.867227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.867244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.867260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:03.023 [2024-07-25 11:49:01.867276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:03.024 [2024-07-25 11:49:01.867292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:03.024 [2024-07-25 11:49:01.867308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:03.024 [2024-07-25 11:49:01.867325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:03.024 [2024-07-25 11:49:01.867341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:03.024 [2024-07-25 11:49:01.867357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:03.024 [2024-07-25 11:49:01.867373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:03.024 [2024-07-25 11:49:01.867389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:03.024 [2024-07-25 11:49:01.867407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:03.024 [2024-07-25 11:49:01.867424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:03.024 [2024-07-25 11:49:01.867449] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:03.024 [2024-07-25 11:49:01.867466] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c1430877-b154-4b5e-893a-2e52e6ce0696 00:24:03.024 [2024-07-25 11:49:01.867490] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:03.024 [2024-07-25 11:49:01.867505] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:03.024 [2024-07-25 11:49:01.867518] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:03.024 [2024-07-25 11:49:01.867532] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:03.024 [2024-07-25 11:49:01.867546] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:03.024 [2024-07-25 11:49:01.867560] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:03.024 [2024-07-25 11:49:01.867574] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:03.024 [2024-07-25 11:49:01.867586] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:03.024 [2024-07-25 11:49:01.867598] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:03.024 [2024-07-25 11:49:01.867611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.024 [2024-07-25 11:49:01.867625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:03.024 [2024-07-25 11:49:01.867647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.757 ms 00:24:03.024 [2024-07-25 11:49:01.867661] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.024 [2024-07-25 11:49:01.884522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.024 [2024-07-25 11:49:01.884572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:03.024 [2024-07-25 11:49:01.884612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.804 ms 00:24:03.024 [2024-07-25 11:49:01.884627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.024 [2024-07-25 11:49:01.885130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.024 [2024-07-25 11:49:01.885167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:03.024 [2024-07-25 11:49:01.885185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.471 ms 00:24:03.024 [2024-07-25 11:49:01.885200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.024 [2024-07-25 11:49:01.923010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:03.024 [2024-07-25 11:49:01.923075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:03.024 [2024-07-25 11:49:01.923118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:03.024 [2024-07-25 11:49:01.923132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.024 [2024-07-25 11:49:01.923207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:03.024 [2024-07-25 11:49:01.923227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:03.024 [2024-07-25 11:49:01.923242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:03.024 [2024-07-25 11:49:01.923256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.024 [2024-07-25 11:49:01.923368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:03.024 [2024-07-25 11:49:01.923407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:03.024 [2024-07-25 11:49:01.923425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:03.024 [2024-07-25 11:49:01.923439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.024 [2024-07-25 11:49:01.923467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:03.024 [2024-07-25 11:49:01.923491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:03.024 [2024-07-25 11:49:01.923506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:03.024 [2024-07-25 11:49:01.923519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.024 [2024-07-25 11:49:02.021104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:03.024 [2024-07-25 11:49:02.021200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:03.024 [2024-07-25 11:49:02.021242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:03.024 [2024-07-25 11:49:02.021257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.282 [2024-07-25 11:49:02.103723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:03.282 [2024-07-25 11:49:02.103832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:03.282 [2024-07-25 11:49:02.103871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:24:03.282 [2024-07-25 11:49:02.103885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.282 [2024-07-25 11:49:02.104118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:03.282 [2024-07-25 11:49:02.104144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:03.282 [2024-07-25 11:49:02.104160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:03.282 [2024-07-25 11:49:02.104179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.282 [2024-07-25 11:49:02.104240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:03.282 [2024-07-25 11:49:02.104261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:03.282 [2024-07-25 11:49:02.104287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:03.282 [2024-07-25 11:49:02.104304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.282 [2024-07-25 11:49:02.104446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:03.282 [2024-07-25 11:49:02.104489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:03.282 [2024-07-25 11:49:02.104507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:03.282 [2024-07-25 11:49:02.104521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.282 [2024-07-25 11:49:02.104606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:03.282 [2024-07-25 11:49:02.104637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:03.282 [2024-07-25 11:49:02.104653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:03.282 [2024-07-25 11:49:02.104667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.282 [2024-07-25 11:49:02.104744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:03.282 [2024-07-25 11:49:02.104786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:03.282 [2024-07-25 11:49:02.104802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:03.282 [2024-07-25 11:49:02.104815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.282 [2024-07-25 11:49:02.104880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:03.282 [2024-07-25 11:49:02.104900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:03.282 [2024-07-25 11:49:02.104916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:03.282 [2024-07-25 11:49:02.104951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.282 [2024-07-25 11:49:02.105128] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 431.242 ms, result 0 00:24:04.218 00:24:04.218 00:24:04.475 11:49:03 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:06.375 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:24:06.375 11:49:05 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:24:06.633 [2024-07-25 11:49:05.508837] Starting SPDK v24.09-pre git 
sha1 704257090 / DPDK 24.03.0 initialization... 00:24:06.633 [2024-07-25 11:49:05.509059] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81725 ] 00:24:06.633 [2024-07-25 11:49:05.675522] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:06.891 [2024-07-25 11:49:05.928854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:07.458 [2024-07-25 11:49:06.265572] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:07.458 [2024-07-25 11:49:06.265701] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:07.458 [2024-07-25 11:49:06.428435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.458 [2024-07-25 11:49:06.428500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:07.458 [2024-07-25 11:49:06.428523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:07.458 [2024-07-25 11:49:06.428536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.458 [2024-07-25 11:49:06.428603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.458 [2024-07-25 11:49:06.428623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:07.458 [2024-07-25 11:49:06.428637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:24:07.458 [2024-07-25 11:49:06.428653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.458 [2024-07-25 11:49:06.428699] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:07.458 [2024-07-25 11:49:06.429767] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:07.458 [2024-07-25 11:49:06.429814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.458 [2024-07-25 11:49:06.429829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:07.458 [2024-07-25 11:49:06.429843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.136 ms 00:24:07.458 [2024-07-25 11:49:06.429855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.458 [2024-07-25 11:49:06.431879] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:07.458 [2024-07-25 11:49:06.448929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.458 [2024-07-25 11:49:06.449043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:07.458 [2024-07-25 11:49:06.449078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.052 ms 00:24:07.458 [2024-07-25 11:49:06.449091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.458 [2024-07-25 11:49:06.449162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.458 [2024-07-25 11:49:06.449185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:07.458 [2024-07-25 11:49:06.449199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:24:07.458 [2024-07-25 11:49:06.449210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.458 [2024-07-25 11:49:06.457933] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.458 [2024-07-25 11:49:06.458003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:07.458 [2024-07-25 11:49:06.458036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.608 ms 00:24:07.458 [2024-07-25 11:49:06.458049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.458 [2024-07-25 11:49:06.458152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.458 [2024-07-25 11:49:06.458172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:07.458 [2024-07-25 11:49:06.458186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:24:07.458 [2024-07-25 11:49:06.458198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.458 [2024-07-25 11:49:06.458286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.458 [2024-07-25 11:49:06.458312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:07.458 [2024-07-25 11:49:06.458329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:24:07.458 [2024-07-25 11:49:06.458349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.458 [2024-07-25 11:49:06.458412] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:07.458 [2024-07-25 11:49:06.463374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.458 [2024-07-25 11:49:06.463431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:07.458 [2024-07-25 11:49:06.463463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.978 ms 00:24:07.458 [2024-07-25 11:49:06.463475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.458 [2024-07-25 11:49:06.463536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.458 [2024-07-25 11:49:06.463554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:07.458 [2024-07-25 11:49:06.463567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:07.458 [2024-07-25 11:49:06.463579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.458 [2024-07-25 11:49:06.463650] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:07.458 [2024-07-25 11:49:06.463715] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:07.458 [2024-07-25 11:49:06.463788] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:07.458 [2024-07-25 11:49:06.463837] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:24:07.458 [2024-07-25 11:49:06.463991] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:07.458 [2024-07-25 11:49:06.464026] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:07.458 [2024-07-25 11:49:06.464044] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:24:07.458 [2024-07-25 11:49:06.464060] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:07.458 
[2024-07-25 11:49:06.464080] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:07.458 [2024-07-25 11:49:06.464106] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:07.458 [2024-07-25 11:49:06.464127] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:07.458 [2024-07-25 11:49:06.464148] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:07.458 [2024-07-25 11:49:06.464168] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:07.458 [2024-07-25 11:49:06.464191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.458 [2024-07-25 11:49:06.464224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:07.458 [2024-07-25 11:49:06.464244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.543 ms 00:24:07.458 [2024-07-25 11:49:06.464257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.458 [2024-07-25 11:49:06.464399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.459 [2024-07-25 11:49:06.464428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:07.459 [2024-07-25 11:49:06.464451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:24:07.459 [2024-07-25 11:49:06.464473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.459 [2024-07-25 11:49:06.464623] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:07.459 [2024-07-25 11:49:06.464676] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:07.459 [2024-07-25 11:49:06.464710] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:07.459 [2024-07-25 11:49:06.464734] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:07.459 [2024-07-25 11:49:06.464754] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:07.459 [2024-07-25 11:49:06.464775] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:07.459 [2024-07-25 11:49:06.464793] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:07.459 [2024-07-25 11:49:06.464805] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:07.459 [2024-07-25 11:49:06.464823] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:07.459 [2024-07-25 11:49:06.464844] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:07.459 [2024-07-25 11:49:06.464867] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:07.459 [2024-07-25 11:49:06.464890] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:07.459 [2024-07-25 11:49:06.464910] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:07.459 [2024-07-25 11:49:06.464955] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:07.459 [2024-07-25 11:49:06.464979] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:07.459 [2024-07-25 11:49:06.464999] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:07.459 [2024-07-25 11:49:06.465018] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:07.459 [2024-07-25 11:49:06.465040] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:07.459 [2024-07-25 
11:49:06.465062] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:07.459 [2024-07-25 11:49:06.465082] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:07.459 [2024-07-25 11:49:06.465128] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:07.459 [2024-07-25 11:49:06.465153] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:07.459 [2024-07-25 11:49:06.465173] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:07.459 [2024-07-25 11:49:06.465193] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:07.459 [2024-07-25 11:49:06.465212] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:07.459 [2024-07-25 11:49:06.465232] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:07.459 [2024-07-25 11:49:06.465252] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:07.459 [2024-07-25 11:49:06.465266] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:07.459 [2024-07-25 11:49:06.465278] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:07.459 [2024-07-25 11:49:06.465290] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:07.459 [2024-07-25 11:49:06.465301] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:07.459 [2024-07-25 11:49:06.465319] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:07.459 [2024-07-25 11:49:06.465340] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:07.459 [2024-07-25 11:49:06.465360] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:07.459 [2024-07-25 11:49:06.465382] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:07.459 [2024-07-25 11:49:06.465404] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:07.459 [2024-07-25 11:49:06.465426] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:07.459 [2024-07-25 11:49:06.465447] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:07.459 [2024-07-25 11:49:06.465468] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:07.459 [2024-07-25 11:49:06.465482] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:07.459 [2024-07-25 11:49:06.465494] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:07.459 [2024-07-25 11:49:06.465505] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:07.459 [2024-07-25 11:49:06.465517] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:07.459 [2024-07-25 11:49:06.465528] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:07.459 [2024-07-25 11:49:06.465548] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:07.459 [2024-07-25 11:49:06.465570] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:07.459 [2024-07-25 11:49:06.465592] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:07.459 [2024-07-25 11:49:06.465616] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:07.459 [2024-07-25 11:49:06.465638] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:07.459 [2024-07-25 11:49:06.465659] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] 
blocks: 3.38 MiB 00:24:07.459 [2024-07-25 11:49:06.465679] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:07.459 [2024-07-25 11:49:06.465699] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:07.459 [2024-07-25 11:49:06.465717] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:07.459 [2024-07-25 11:49:06.465732] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:07.459 [2024-07-25 11:49:06.465765] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:07.459 [2024-07-25 11:49:06.465785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:07.459 [2024-07-25 11:49:06.465807] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:07.459 [2024-07-25 11:49:06.465830] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:07.459 [2024-07-25 11:49:06.465856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:07.459 [2024-07-25 11:49:06.465880] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:07.459 [2024-07-25 11:49:06.465902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:07.459 [2024-07-25 11:49:06.465943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:07.459 [2024-07-25 11:49:06.465959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:07.459 [2024-07-25 11:49:06.465973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:07.459 [2024-07-25 11:49:06.465992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:07.459 [2024-07-25 11:49:06.466013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:07.459 [2024-07-25 11:49:06.466037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:07.459 [2024-07-25 11:49:06.466062] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:07.459 [2024-07-25 11:49:06.466086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:07.459 [2024-07-25 11:49:06.466109] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:07.459 [2024-07-25 11:49:06.466133] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:07.459 [2024-07-25 11:49:06.466157] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:07.459 [2024-07-25 11:49:06.466174] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:07.459 [2024-07-25 11:49:06.466196] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:07.459 [2024-07-25 11:49:06.466219] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:07.459 [2024-07-25 11:49:06.466244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.459 [2024-07-25 11:49:06.466270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:07.459 [2024-07-25 11:49:06.466293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.702 ms 00:24:07.459 [2024-07-25 11:49:06.466312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.718 [2024-07-25 11:49:06.515438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.718 [2024-07-25 11:49:06.515528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:07.718 [2024-07-25 11:49:06.515567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.034 ms 00:24:07.718 [2024-07-25 11:49:06.515582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.718 [2024-07-25 11:49:06.515731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.718 [2024-07-25 11:49:06.515750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:07.718 [2024-07-25 11:49:06.515764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:24:07.718 [2024-07-25 11:49:06.515776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.718 [2024-07-25 11:49:06.559469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.718 [2024-07-25 11:49:06.559544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:07.718 [2024-07-25 11:49:06.559580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.559 ms 00:24:07.718 [2024-07-25 11:49:06.559602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.718 [2024-07-25 11:49:06.559670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.718 [2024-07-25 11:49:06.559689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:07.718 [2024-07-25 11:49:06.559703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:07.718 [2024-07-25 11:49:06.559720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.718 [2024-07-25 11:49:06.560478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.718 [2024-07-25 11:49:06.560519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:07.718 [2024-07-25 11:49:06.560536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.648 ms 00:24:07.718 [2024-07-25 11:49:06.560548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.718 [2024-07-25 11:49:06.560778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.718 [2024-07-25 11:49:06.560823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 
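The 80.00 MiB l2p region in the layout dump above is exactly the advertised mapping table: 20971520 L2P entries x 4 bytes per address = 83886080 bytes = 80 MiB; the remaining NV cache regions (band/trim/P2L metadata and their mirrors) account for the rest of the ~114 MiB laid out at the front of the 5171.00 MiB cache device.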
00:24:07.718 [2024-07-25 11:49:06.560849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:24:07.718 [2024-07-25 11:49:06.560872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.718 [2024-07-25 11:49:06.579088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.718 [2024-07-25 11:49:06.579148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:07.718 [2024-07-25 11:49:06.579182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.144 ms 00:24:07.718 [2024-07-25 11:49:06.579201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.718 [2024-07-25 11:49:06.596306] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:07.718 [2024-07-25 11:49:06.596363] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:07.718 [2024-07-25 11:49:06.596383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.718 [2024-07-25 11:49:06.596396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:07.718 [2024-07-25 11:49:06.596411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.042 ms 00:24:07.718 [2024-07-25 11:49:06.596422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.718 [2024-07-25 11:49:06.625258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.718 [2024-07-25 11:49:06.625324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:07.718 [2024-07-25 11:49:06.625360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.782 ms 00:24:07.718 [2024-07-25 11:49:06.625373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.718 [2024-07-25 11:49:06.641022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.718 [2024-07-25 11:49:06.641081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:07.718 [2024-07-25 11:49:06.641114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.600 ms 00:24:07.718 [2024-07-25 11:49:06.641126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.718 [2024-07-25 11:49:06.656442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.718 [2024-07-25 11:49:06.656486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:07.718 [2024-07-25 11:49:06.656503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.273 ms 00:24:07.718 [2024-07-25 11:49:06.656515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.718 [2024-07-25 11:49:06.657486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.718 [2024-07-25 11:49:06.657540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:07.718 [2024-07-25 11:49:06.657573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.840 ms 00:24:07.718 [2024-07-25 11:49:06.657584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.719 [2024-07-25 11:49:06.733290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.719 [2024-07-25 11:49:06.733388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:07.719 [2024-07-25 11:49:06.733411] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 75.671 ms 00:24:07.719 [2024-07-25 11:49:06.733432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.719 [2024-07-25 11:49:06.746119] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:07.719 [2024-07-25 11:49:06.749744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.719 [2024-07-25 11:49:06.749786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:07.719 [2024-07-25 11:49:06.749803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.236 ms 00:24:07.719 [2024-07-25 11:49:06.749816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.719 [2024-07-25 11:49:06.749962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.719 [2024-07-25 11:49:06.749986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:07.719 [2024-07-25 11:49:06.750001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:07.719 [2024-07-25 11:49:06.750013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.719 [2024-07-25 11:49:06.750159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.719 [2024-07-25 11:49:06.750206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:07.719 [2024-07-25 11:49:06.750234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:24:07.719 [2024-07-25 11:49:06.750257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.719 [2024-07-25 11:49:06.750319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.719 [2024-07-25 11:49:06.750347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:07.719 [2024-07-25 11:49:06.750363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:07.719 [2024-07-25 11:49:06.750379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.719 [2024-07-25 11:49:06.750451] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:07.719 [2024-07-25 11:49:06.750494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.719 [2024-07-25 11:49:06.750525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:07.719 [2024-07-25 11:49:06.750549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:24:07.719 [2024-07-25 11:49:06.750571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.977 [2024-07-25 11:49:06.781217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.977 [2024-07-25 11:49:06.781260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:07.977 [2024-07-25 11:49:06.781294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.601 ms 00:24:07.977 [2024-07-25 11:49:06.781315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.977 [2024-07-25 11:49:06.781402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.977 [2024-07-25 11:49:06.781421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:07.977 [2024-07-25 11:49:06.781451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:24:07.977 [2024-07-25 11:49:06.781466] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:24:07.977 [2024-07-25 11:49:06.783106] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 353.965 ms, result 0 00:24:47.738  Copying: 26/1024 [MB] (26 MBps) [...] Copying: 1023/1024 [MB] (16 MBps) Copying: 1024/1024 [MB] (average 25 MBps)[2024-07-25 11:49:46.516470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.738 [2024-07-25 11:49:46.516540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:47.738 [2024-07-25 11:49:46.516563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:47.738 [2024-07-25 11:49:46.516582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.738 [2024-07-25 11:49:46.517810] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:47.738 [2024-07-25 11:49:46.522599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.738 [2024-07-25 11:49:46.522668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:47.738 [2024-07-25 11:49:46.522685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.752 ms 00:24:47.738 [2024-07-25 11:49:46.522698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.738 [2024-07-25 11:49:46.536080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.738 [2024-07-25 11:49:46.536141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:47.738 [2024-07-25 11:49:46.536161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.985 ms 00:24:47.738 [2024-07-25 11:49:46.536175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.738 [2024-07-25 11:49:46.558910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.738 [2024-07-25 11:49:46.558962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:47.738 [2024-07-25 11:49:46.558980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.702 ms 00:24:47.738 [2024-07-25 11:49:46.558993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.738 [2024-07-25 11:49:46.565549] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.738 [2024-07-25 11:49:46.565613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:47.738 [2024-07-25 11:49:46.565629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.515 ms 00:24:47.738 [2024-07-25 11:49:46.565641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.738 [2024-07-25 11:49:46.597408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.738 [2024-07-25 11:49:46.597449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:47.738 [2024-07-25 11:49:46.597468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.690 ms 00:24:47.738 [2024-07-25 11:49:46.597480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.738 [2024-07-25 11:49:46.615571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.738 [2024-07-25 11:49:46.615616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:47.738 [2024-07-25 11:49:46.615635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.048 ms 00:24:47.738 [2024-07-25 11:49:46.615649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.738 [2024-07-25 11:49:46.712039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.738 [2024-07-25 11:49:46.712092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:47.738 [2024-07-25 11:49:46.712112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 96.342 ms 00:24:47.738 [2024-07-25 11:49:46.712126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.738 [2024-07-25 11:49:46.744046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.738 [2024-07-25 11:49:46.744097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:24:47.738 [2024-07-25 11:49:46.744116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.896 ms 00:24:47.738 [2024-07-25 11:49:46.744128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.738 [2024-07-25 11:49:46.774926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.738 [2024-07-25 11:49:46.774976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:24:47.738 [2024-07-25 11:49:46.775009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.753 ms 00:24:47.738 [2024-07-25 11:49:46.775021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.998 [2024-07-25 11:49:46.806044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.998 [2024-07-25 11:49:46.806114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:47.998 [2024-07-25 11:49:46.806147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.981 ms 00:24:47.998 [2024-07-25 11:49:46.806160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.998 [2024-07-25 11:49:46.836439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.998 [2024-07-25 11:49:46.836477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:47.998 [2024-07-25 11:49:46.836493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.158 ms 00:24:47.998 [2024-07-25 11:49:46.836505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:24:47.998 [2024-07-25 11:49:46.836548] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:47.998 [2024-07-25 11:49:46.836572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 118528 / 261120 wr_cnt: 1 state: open 00:24:47.998 [2024-07-25 11:49:46.836587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:47.998 [2024-07-25 11:49:46.836617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:47.998 [2024-07-25 11:49:46.836644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:47.998 [2024-07-25 11:49:46.836657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:47.998 [2024-07-25 11:49:46.836670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:47.998 [2024-07-25 11:49:46.836699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:47.998 [2024-07-25 11:49:46.836712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:47.998 [2024-07-25 11:49:46.836741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:47.998 [2024-07-25 11:49:46.836755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:47.998 [2024-07-25 11:49:46.836768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:47.998 [2024-07-25 11:49:46.836781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:47.998 [2024-07-25 11:49:46.836794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:47.998 [2024-07-25 11:49:46.836808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:47.998 [2024-07-25 11:49:46.836821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:47.998 [2024-07-25 11:49:46.836834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:47.998 [2024-07-25 11:49:46.836847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:47.998 [2024-07-25 11:49:46.836860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:47.998 [2024-07-25 11:49:46.836873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:47.998 [2024-07-25 11:49:46.836887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:47.998 [2024-07-25 11:49:46.836899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:47.998 [2024-07-25 11:49:46.836912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:47.998 [2024-07-25 11:49:46.836926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:47.998 [2024-07-25 11:49:46.836938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 
state: free 00:24:47.998 [2024-07-25 11:49:46.836957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:47.998 [2024-07-25 11:49:46.836969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:47.998 [2024-07-25 11:49:46.836995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:47.998 [2024-07-25 11:49:46.837009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:47.998 [2024-07-25 11:49:46.837032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:47.998 [2024-07-25 11:49:46.837046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:47.998 [2024-07-25 11:49:46.837059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:47.998 [2024-07-25 11:49:46.837072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:47.998 [2024-07-25 11:49:46.837086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:47.998 [2024-07-25 11:49:46.837100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:47.998 [2024-07-25 11:49:46.837112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:47.998 [2024-07-25 11:49:46.837125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:47.998 [2024-07-25 11:49:46.837139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:47.998 [2024-07-25 11:49:46.837152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:47.998 [2024-07-25 11:49:46.837166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:47.998 [2024-07-25 11:49:46.837179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:47.998 [2024-07-25 11:49:46.837192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:47.998 [2024-07-25 11:49:46.837205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:47.998 [2024-07-25 11:49:46.837218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:47.998 [2024-07-25 11:49:46.837231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:47.998 [2024-07-25 11:49:46.837244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:47.998 [2024-07-25 11:49:46.837257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:47.998 [2024-07-25 11:49:46.837270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:47.998 [2024-07-25 11:49:46.837283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:47.998 [2024-07-25 11:49:46.837296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 
0 / 261120 wr_cnt: 0 state: free 00:24:47.998 [2024-07-25 11:49:46.837308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:47.998 [2024-07-25 11:49:46.837321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:47.998 [2024-07-25 11:49:46.837334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:47.998 [2024-07-25 11:49:46.837347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:47.999 [2024-07-25 11:49:46.837361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:47.999 [2024-07-25 11:49:46.837374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:47.999 [2024-07-25 11:49:46.837387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:47.999 [2024-07-25 11:49:46.837400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:47.999 [2024-07-25 11:49:46.837413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:47.999 [2024-07-25 11:49:46.837425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:47.999 [2024-07-25 11:49:46.837438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:47.999 [2024-07-25 11:49:46.837458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:47.999 [2024-07-25 11:49:46.837472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:47.999 [2024-07-25 11:49:46.837485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:47.999 [2024-07-25 11:49:46.837498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:47.999 [2024-07-25 11:49:46.837512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:47.999 [2024-07-25 11:49:46.837525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:47.999 [2024-07-25 11:49:46.837538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:47.999 [2024-07-25 11:49:46.837551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:47.999 [2024-07-25 11:49:46.837564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:47.999 [2024-07-25 11:49:46.837577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:47.999 [2024-07-25 11:49:46.837589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:47.999 [2024-07-25 11:49:46.837603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:47.999 [2024-07-25 11:49:46.837617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:47.999 [2024-07-25 11:49:46.837629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:47.999 [2024-07-25 11:49:46.837656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:47.999 [2024-07-25 11:49:46.837669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:47.999 [2024-07-25 11:49:46.837682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:47.999 [2024-07-25 11:49:46.837694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:47.999 [2024-07-25 11:49:46.837707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:47.999 [2024-07-25 11:49:46.837719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:47.999 [2024-07-25 11:49:46.837732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:47.999 [2024-07-25 11:49:46.837745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:47.999 [2024-07-25 11:49:46.837757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:47.999 [2024-07-25 11:49:46.837770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:47.999 [2024-07-25 11:49:46.837782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:47.999 [2024-07-25 11:49:46.837795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:47.999 [2024-07-25 11:49:46.837807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:47.999 [2024-07-25 11:49:46.837820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:47.999 [2024-07-25 11:49:46.837833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:47.999 [2024-07-25 11:49:46.837845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:47.999 [2024-07-25 11:49:46.837857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:47.999 [2024-07-25 11:49:46.837870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:47.999 [2024-07-25 11:49:46.837899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:47.999 [2024-07-25 11:49:46.837912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:47.999 [2024-07-25 11:49:46.837925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:47.999 [2024-07-25 11:49:46.837948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:47.999 [2024-07-25 11:49:46.837977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:47.999 [2024-07-25 11:49:46.837990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:47.999 [2024-07-25 11:49:46.838004] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:47.999 [2024-07-25 11:49:46.838017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:47.999 [2024-07-25 11:49:46.838039] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:47.999 [2024-07-25 11:49:46.838052] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c1430877-b154-4b5e-893a-2e52e6ce0696 00:24:47.999 [2024-07-25 11:49:46.838065] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 118528 00:24:47.999 [2024-07-25 11:49:46.838077] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 119488 00:24:47.999 [2024-07-25 11:49:46.838089] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 118528 00:24:47.999 [2024-07-25 11:49:46.838108] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0081 00:24:47.999 [2024-07-25 11:49:46.838120] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:47.999 [2024-07-25 11:49:46.838132] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:47.999 [2024-07-25 11:49:46.838149] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:47.999 [2024-07-25 11:49:46.838160] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:47.999 [2024-07-25 11:49:46.838171] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:47.999 [2024-07-25 11:49:46.838183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.999 [2024-07-25 11:49:46.838195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:47.999 [2024-07-25 11:49:46.838208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.637 ms 00:24:47.999 [2024-07-25 11:49:46.838220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.999 [2024-07-25 11:49:46.855664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.999 [2024-07-25 11:49:46.855702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:47.999 [2024-07-25 11:49:46.855733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.401 ms 00:24:47.999 [2024-07-25 11:49:46.855746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.999 [2024-07-25 11:49:46.856244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.999 [2024-07-25 11:49:46.856268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:47.999 [2024-07-25 11:49:46.856292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.466 ms 00:24:47.999 [2024-07-25 11:49:46.856305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.999 [2024-07-25 11:49:46.894480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.999 [2024-07-25 11:49:46.894536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:47.999 [2024-07-25 11:49:46.894556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.999 [2024-07-25 11:49:46.894568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.999 [2024-07-25 11:49:46.894649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.999 [2024-07-25 11:49:46.894680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 
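The WAF figure in the statistics dump above follows directly from the two counters before it: 119488 total writes / 118528 user writes ≈ 1.0081, i.e. less than 1% of the media writes were FTL housekeeping during this run. Most of the rest of this output is the same Action/name/duration/status quadruple that trace_step() in mngt/ftl_mngt.c emits for every management step, so the slow steps are easy to pull out offline. A minimal sketch, assuming the capture is saved with one NOTICE entry per line in a file called ftl.log (a hypothetical name):

  awk '
    /428:trace_step/ { sub(/.*name: /, ""); name = $0 }        # remember the step name
    /430:trace_step/ { sub(/.*duration: /, ""); sub(/ ms.*/, "")
                       printf "%10.3f ms  %s\n", $0, name }    # pair it with its duration
  ' ftl.log | sort -rn | head                                  # ten slowest steps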
00:24:47.999 [2024-07-25 11:49:46.894693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.999 [2024-07-25 11:49:46.894705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.999 [2024-07-25 11:49:46.894779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.999 [2024-07-25 11:49:46.894803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:47.999 [2024-07-25 11:49:46.894816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.999 [2024-07-25 11:49:46.894834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.999 [2024-07-25 11:49:46.894857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.999 [2024-07-25 11:49:46.894872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:47.999 [2024-07-25 11:49:46.894883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.999 [2024-07-25 11:49:46.894894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.999 [2024-07-25 11:49:46.995941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.999 [2024-07-25 11:49:46.996028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:47.999 [2024-07-25 11:49:46.996047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.999 [2024-07-25 11:49:46.996067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.257 [2024-07-25 11:49:47.081645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:48.257 [2024-07-25 11:49:47.081739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:48.257 [2024-07-25 11:49:47.081760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:48.257 [2024-07-25 11:49:47.081773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.257 [2024-07-25 11:49:47.081896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:48.257 [2024-07-25 11:49:47.081931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:48.257 [2024-07-25 11:49:47.081960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:48.257 [2024-07-25 11:49:47.082015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.257 [2024-07-25 11:49:47.082078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:48.257 [2024-07-25 11:49:47.082096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:48.257 [2024-07-25 11:49:47.082110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:48.257 [2024-07-25 11:49:47.082123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.257 [2024-07-25 11:49:47.082248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:48.257 [2024-07-25 11:49:47.082269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:48.257 [2024-07-25 11:49:47.082282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:48.257 [2024-07-25 11:49:47.082295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.257 [2024-07-25 11:49:47.082360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:48.258 [2024-07-25 11:49:47.082399] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:48.258 [2024-07-25 11:49:47.082413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:48.258 [2024-07-25 11:49:47.082425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.258 [2024-07-25 11:49:47.082476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:48.258 [2024-07-25 11:49:47.082492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:48.258 [2024-07-25 11:49:47.082506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:48.258 [2024-07-25 11:49:47.082518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.258 [2024-07-25 11:49:47.082615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:48.258 [2024-07-25 11:49:47.082633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:48.258 [2024-07-25 11:49:47.082647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:48.258 [2024-07-25 11:49:47.082659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.258 [2024-07-25 11:49:47.082851] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 570.362 ms, result 0 00:24:50.172 00:24:50.172 00:24:50.172 11:49:48 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:24:50.172 [2024-07-25 11:49:48.839259] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
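What ftl_restore is timing here is a plain write/read-back/verify cycle through the FTL bdev. Stripped down to the commands that actually appear in this log (binary, flags, and paths verbatim; only the shell variables are added for brevity), the round trip is:

  SPDK=/home/vagrant/spdk_repo/spdk
  CFG=$SPDK/test/ftl/config/ftl.json

  # write the test pattern into ftl0 at a 131072-block offset ...
  $SPDK/build/bin/spdk_dd --if=$SPDK/test/ftl/testfile --ob=ftl0 --json=$CFG --seek=131072
  # ... read the same 262144 blocks back out over the pattern file ...
  $SPDK/build/bin/spdk_dd --ib=ftl0 --of=$SPDK/test/ftl/testfile --json=$CFG --skip=131072 --count=262144
  # ... and verify the round trip against the stored checksum
  md5sum -c $SPDK/test/ftl/testfile.md5

Each spdk_dd invocation starts and stops a whole SPDK application around the copy, which is why the full 'FTL startup'/'FTL shutdown' management sequence repeats around every transfer above.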
00:24:50.172 [2024-07-25 11:49:48.839437] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82151 ] 00:24:50.172 [2024-07-25 11:49:49.005121] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:50.430 [2024-07-25 11:49:49.247915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:50.759 [2024-07-25 11:49:49.599327] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:50.759 [2024-07-25 11:49:49.599439] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:50.759 [2024-07-25 11:49:49.764445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.759 [2024-07-25 11:49:49.764512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:50.759 [2024-07-25 11:49:49.764535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:50.759 [2024-07-25 11:49:49.764547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.759 [2024-07-25 11:49:49.764614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.759 [2024-07-25 11:49:49.764632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:50.759 [2024-07-25 11:49:49.764644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:24:50.759 [2024-07-25 11:49:49.764659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.759 [2024-07-25 11:49:49.764694] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:50.759 [2024-07-25 11:49:49.765584] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:50.759 [2024-07-25 11:49:49.765626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.759 [2024-07-25 11:49:49.765640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:50.759 [2024-07-25 11:49:49.765652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.943 ms 00:24:50.759 [2024-07-25 11:49:49.765664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.759 [2024-07-25 11:49:49.767653] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:50.759 [2024-07-25 11:49:49.784544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.759 [2024-07-25 11:49:49.784590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:50.760 [2024-07-25 11:49:49.784616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.892 ms 00:24:50.760 [2024-07-25 11:49:49.784628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.760 [2024-07-25 11:49:49.784706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.760 [2024-07-25 11:49:49.784729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:50.760 [2024-07-25 11:49:49.784742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:24:50.760 [2024-07-25 11:49:49.784754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.760 [2024-07-25 11:49:49.793618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:24:50.760 [2024-07-25 11:49:49.793710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:50.760 [2024-07-25 11:49:49.793728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.767 ms 00:24:50.760 [2024-07-25 11:49:49.793740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.760 [2024-07-25 11:49:49.793876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.760 [2024-07-25 11:49:49.793896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:50.760 [2024-07-25 11:49:49.793910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:24:50.760 [2024-07-25 11:49:49.793936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.760 [2024-07-25 11:49:49.794046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.760 [2024-07-25 11:49:49.794065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:50.760 [2024-07-25 11:49:49.794077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:24:50.760 [2024-07-25 11:49:49.794088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.760 [2024-07-25 11:49:49.794127] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:50.760 [2024-07-25 11:49:49.799356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.760 [2024-07-25 11:49:49.799410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:50.760 [2024-07-25 11:49:49.799424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.241 ms 00:24:50.760 [2024-07-25 11:49:49.799440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.760 [2024-07-25 11:49:49.799486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.760 [2024-07-25 11:49:49.799502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:50.760 [2024-07-25 11:49:49.799513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:24:50.760 [2024-07-25 11:49:49.799524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.760 [2024-07-25 11:49:49.799594] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:50.760 [2024-07-25 11:49:49.799655] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:50.760 [2024-07-25 11:49:49.799712] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:50.760 [2024-07-25 11:49:49.799736] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:24:50.760 [2024-07-25 11:49:49.799842] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:50.760 [2024-07-25 11:49:49.799857] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:50.760 [2024-07-25 11:49:49.799873] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:24:50.760 [2024-07-25 11:49:49.799888] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:50.760 [2024-07-25 11:49:49.799902] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:50.760 [2024-07-25 11:49:49.799915] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:50.760 [2024-07-25 11:49:49.799926] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:50.760 [2024-07-25 11:49:49.799953] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:50.760 [2024-07-25 11:49:49.799966] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:50.760 [2024-07-25 11:49:49.799984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.760 [2024-07-25 11:49:49.799995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:50.760 [2024-07-25 11:49:49.800007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.394 ms 00:24:50.760 [2024-07-25 11:49:49.800018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.760 [2024-07-25 11:49:49.800111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.760 [2024-07-25 11:49:49.800126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:50.760 [2024-07-25 11:49:49.800138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:24:50.760 [2024-07-25 11:49:49.800149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.760 [2024-07-25 11:49:49.800255] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:50.760 [2024-07-25 11:49:49.800288] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:50.760 [2024-07-25 11:49:49.800311] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:50.760 [2024-07-25 11:49:49.800323] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:50.760 [2024-07-25 11:49:49.800335] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:50.760 [2024-07-25 11:49:49.800345] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:50.760 [2024-07-25 11:49:49.800355] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:50.760 [2024-07-25 11:49:49.800365] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:50.760 [2024-07-25 11:49:49.800375] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:50.760 [2024-07-25 11:49:49.800385] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:50.760 [2024-07-25 11:49:49.800395] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:50.760 [2024-07-25 11:49:49.800405] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:50.760 [2024-07-25 11:49:49.800415] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:50.760 [2024-07-25 11:49:49.800425] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:50.760 [2024-07-25 11:49:49.800436] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:50.760 [2024-07-25 11:49:49.800446] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:50.760 [2024-07-25 11:49:49.800456] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:50.760 [2024-07-25 11:49:49.800466] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:50.760 [2024-07-25 11:49:49.800476] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:50.760 [2024-07-25 11:49:49.800486] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:50.760 [2024-07-25 11:49:49.800509] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:50.760 [2024-07-25 11:49:49.800520] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:50.760 [2024-07-25 11:49:49.800530] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:50.760 [2024-07-25 11:49:49.800540] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:50.760 [2024-07-25 11:49:49.800550] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:50.760 [2024-07-25 11:49:49.800560] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:50.760 [2024-07-25 11:49:49.800574] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:50.760 [2024-07-25 11:49:49.800585] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:50.760 [2024-07-25 11:49:49.800596] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:50.760 [2024-07-25 11:49:49.800606] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:50.760 [2024-07-25 11:49:49.800616] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:50.760 [2024-07-25 11:49:49.800626] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:50.760 [2024-07-25 11:49:49.800644] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:50.760 [2024-07-25 11:49:49.800654] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:50.760 [2024-07-25 11:49:49.800665] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:50.760 [2024-07-25 11:49:49.800675] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:50.760 [2024-07-25 11:49:49.800685] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:50.760 [2024-07-25 11:49:49.800696] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:50.760 [2024-07-25 11:49:49.800706] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:50.760 [2024-07-25 11:49:49.800717] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:50.760 [2024-07-25 11:49:49.800727] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:50.760 [2024-07-25 11:49:49.800739] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:50.760 [2024-07-25 11:49:49.800749] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:50.760 [2024-07-25 11:49:49.800759] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:50.760 [2024-07-25 11:49:49.800771] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:50.760 [2024-07-25 11:49:49.800792] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:50.760 [2024-07-25 11:49:49.800803] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:50.760 [2024-07-25 11:49:49.800823] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:50.760 [2024-07-25 11:49:49.800834] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:50.760 [2024-07-25 11:49:49.800844] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:50.760 
[2024-07-25 11:49:49.800855] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:50.760 [2024-07-25 11:49:49.800865] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:50.760 [2024-07-25 11:49:49.800876] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:50.760 [2024-07-25 11:49:49.800889] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:50.760 [2024-07-25 11:49:49.800903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:50.760 [2024-07-25 11:49:49.800916] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:50.761 [2024-07-25 11:49:49.800943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:50.761 [2024-07-25 11:49:49.800956] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:50.761 [2024-07-25 11:49:49.800969] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:50.761 [2024-07-25 11:49:49.800981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:50.761 [2024-07-25 11:49:49.800992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:50.761 [2024-07-25 11:49:49.801004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:50.761 [2024-07-25 11:49:49.801016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:50.761 [2024-07-25 11:49:49.801027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:50.761 [2024-07-25 11:49:49.801039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:50.761 [2024-07-25 11:49:49.801050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:50.761 [2024-07-25 11:49:49.801062] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:50.761 [2024-07-25 11:49:49.801073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:50.761 [2024-07-25 11:49:49.801085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:50.761 [2024-07-25 11:49:49.801096] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:50.761 [2024-07-25 11:49:49.801114] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:50.761 [2024-07-25 11:49:49.801127] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:24:50.761 [2024-07-25 11:49:49.801138] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:50.761 [2024-07-25 11:49:49.801150] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:50.761 [2024-07-25 11:49:49.801161] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:50.761 [2024-07-25 11:49:49.801174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.761 [2024-07-25 11:49:49.801186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:50.761 [2024-07-25 11:49:49.801198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.983 ms 00:24:50.761 [2024-07-25 11:49:49.801209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.027 [2024-07-25 11:49:49.850651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.027 [2024-07-25 11:49:49.850743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:51.027 [2024-07-25 11:49:49.850765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.373 ms 00:24:51.027 [2024-07-25 11:49:49.850778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.027 [2024-07-25 11:49:49.850936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.027 [2024-07-25 11:49:49.850955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:51.027 [2024-07-25 11:49:49.850969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:24:51.027 [2024-07-25 11:49:49.850980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.027 [2024-07-25 11:49:49.891719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.027 [2024-07-25 11:49:49.891805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:51.027 [2024-07-25 11:49:49.891825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.610 ms 00:24:51.027 [2024-07-25 11:49:49.891837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.027 [2024-07-25 11:49:49.891935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.027 [2024-07-25 11:49:49.891953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:51.027 [2024-07-25 11:49:49.891966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:24:51.027 [2024-07-25 11:49:49.891982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.027 [2024-07-25 11:49:49.892700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.027 [2024-07-25 11:49:49.892733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:51.027 [2024-07-25 11:49:49.892748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.603 ms 00:24:51.027 [2024-07-25 11:49:49.892760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.027 [2024-07-25 11:49:49.892997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.027 [2024-07-25 11:49:49.893019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:51.027 [2024-07-25 11:49:49.893032] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.197 ms 00:24:51.027 [2024-07-25 11:49:49.893048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.027 [2024-07-25 11:49:49.910359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.027 [2024-07-25 11:49:49.910417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:51.027 [2024-07-25 11:49:49.910438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.283 ms 00:24:51.027 [2024-07-25 11:49:49.910450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.027 [2024-07-25 11:49:49.926301] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:24:51.027 [2024-07-25 11:49:49.926360] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:51.027 [2024-07-25 11:49:49.926380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.027 [2024-07-25 11:49:49.926393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:51.027 [2024-07-25 11:49:49.926421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.794 ms 00:24:51.027 [2024-07-25 11:49:49.926432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.027 [2024-07-25 11:49:49.955350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.027 [2024-07-25 11:49:49.955404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:51.027 [2024-07-25 11:49:49.955421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.869 ms 00:24:51.027 [2024-07-25 11:49:49.955434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.027 [2024-07-25 11:49:49.971049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.027 [2024-07-25 11:49:49.971094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:51.027 [2024-07-25 11:49:49.971126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.567 ms 00:24:51.027 [2024-07-25 11:49:49.971138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.027 [2024-07-25 11:49:49.986510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.027 [2024-07-25 11:49:49.986567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:51.027 [2024-07-25 11:49:49.986582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.330 ms 00:24:51.027 [2024-07-25 11:49:49.986593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.027 [2024-07-25 11:49:49.987565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.027 [2024-07-25 11:49:49.987617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:51.027 [2024-07-25 11:49:49.987649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.854 ms 00:24:51.027 [2024-07-25 11:49:49.987665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.027 [2024-07-25 11:49:50.062845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.027 [2024-07-25 11:49:50.062948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:51.027 [2024-07-25 11:49:50.062978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 75.153 ms 00:24:51.027 [2024-07-25 11:49:50.062991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.027 [2024-07-25 11:49:50.074786] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:51.027 [2024-07-25 11:49:50.077498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.027 [2024-07-25 11:49:50.077555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:51.027 [2024-07-25 11:49:50.077572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.439 ms 00:24:51.027 [2024-07-25 11:49:50.077584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.285 [2024-07-25 11:49:50.077726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.285 [2024-07-25 11:49:50.077746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:51.285 [2024-07-25 11:49:50.077760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:51.285 [2024-07-25 11:49:50.077791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.285 [2024-07-25 11:49:50.079660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.285 [2024-07-25 11:49:50.079713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:51.285 [2024-07-25 11:49:50.079728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.812 ms 00:24:51.285 [2024-07-25 11:49:50.079739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.285 [2024-07-25 11:49:50.079773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.286 [2024-07-25 11:49:50.079789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:51.286 [2024-07-25 11:49:50.079801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:51.286 [2024-07-25 11:49:50.079812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.286 [2024-07-25 11:49:50.079854] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:51.286 [2024-07-25 11:49:50.079873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.286 [2024-07-25 11:49:50.079884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:51.286 [2024-07-25 11:49:50.079895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:24:51.286 [2024-07-25 11:49:50.079904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.286 [2024-07-25 11:49:50.109388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.286 [2024-07-25 11:49:50.109446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:51.286 [2024-07-25 11:49:50.109470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.449 ms 00:24:51.286 [2024-07-25 11:49:50.109485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.286 [2024-07-25 11:49:50.109568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.286 [2024-07-25 11:49:50.109588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:51.286 [2024-07-25 11:49:50.109600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:24:51.286 [2024-07-25 11:49:50.109611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:24:51.286 [2024-07-25 11:49:50.117738] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 351.386 ms, result 0
00:25:30.654  Copying: 1024/1024 [MB] (average 26 MBps)
[2024-07-25 11:50:29.646828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:30.654 [2024-07-25 11:50:29.646936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:25:30.654 [2024-07-25 11:50:29.646962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms
00:25:30.654 [2024-07-25 11:50:29.646983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:30.654 [2024-07-25 11:50:29.647016] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:25:30.654 [2024-07-25 11:50:29.651334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:30.654 [2024-07-25 11:50:29.651385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:25:30.654 [2024-07-25 11:50:29.651415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.293 ms
00:25:30.654 [2024-07-25 11:50:29.651427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:30.654 [2024-07-25 11:50:29.651695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:30.654 [2024-07-25 11:50:29.651723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:25:30.654 [2024-07-25 11:50:29.651746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.226 ms
00:25:30.654 [2024-07-25 11:50:29.651757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:30.654 [2024-07-25 11:50:29.657194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:30.654 [2024-07-25 11:50:29.657239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:25:30.654 [2024-07-25 11:50:29.657255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.402 ms
00:25:30.654 [2024-07-25 11:50:29.657267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:30.654 [2024-07-25 11:50:29.663972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
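A quick cross-check on the copy above: 1024 MB at the reported 26 MBps average is roughly 1024 / 26 ≈ 39 s of wall time, which agrees with the gap between the 'FTL startup' finish logged at 11:49:50 and the shutdown trace entries that begin at 11:50:29. What follows is the orderly 'FTL shutdown' management pipeline (deinit IO channels, persist L2P and metadata, set the clean state) before spdk_dd exits.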
00:25:30.654 [2024-07-25 11:50:29.664058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:30.654 [2024-07-25 11:50:29.664086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.649 ms 00:25:30.654 [2024-07-25 11:50:29.664096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.654 [2024-07-25 11:50:29.696788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.654 [2024-07-25 11:50:29.696862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:30.654 [2024-07-25 11:50:29.696895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.638 ms 00:25:30.654 [2024-07-25 11:50:29.696906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.913 [2024-07-25 11:50:29.715491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.913 [2024-07-25 11:50:29.715554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:30.913 [2024-07-25 11:50:29.715585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.533 ms 00:25:30.913 [2024-07-25 11:50:29.715597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.913 [2024-07-25 11:50:29.836324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.913 [2024-07-25 11:50:29.836447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:30.913 [2024-07-25 11:50:29.836472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 120.670 ms 00:25:30.913 [2024-07-25 11:50:29.836484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.913 [2024-07-25 11:50:29.871363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.913 [2024-07-25 11:50:29.871426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:25:30.913 [2024-07-25 11:50:29.871459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.839 ms 00:25:30.913 [2024-07-25 11:50:29.871470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.913 [2024-07-25 11:50:29.902383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.913 [2024-07-25 11:50:29.902473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:25:30.913 [2024-07-25 11:50:29.902508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.852 ms 00:25:30.913 [2024-07-25 11:50:29.902518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.913 [2024-07-25 11:50:29.933392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.913 [2024-07-25 11:50:29.933467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:30.913 [2024-07-25 11:50:29.933486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.823 ms 00:25:30.913 [2024-07-25 11:50:29.933512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.913 [2024-07-25 11:50:29.963483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.913 [2024-07-25 11:50:29.963556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:30.913 [2024-07-25 11:50:29.963573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.869 ms 00:25:30.913 [2024-07-25 11:50:29.963583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.913 [2024-07-25 
11:50:29.963624] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:30.913 [2024-07-25 11:50:29.963647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133376 / 261120 wr_cnt: 1 state: open 00:25:30.913 [2024-07-25 11:50:29.963661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:30.913 [2024-07-25 11:50:29.963672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:30.913 [2024-07-25 11:50:29.963683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:31.172 [2024-07-25 11:50:29.963693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:31.172 [2024-07-25 11:50:29.963704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:31.172 [2024-07-25 11:50:29.963715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.963726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.963736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.963747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.963758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.963769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.963779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.963790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.963817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.963828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.963855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.963883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.963910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.963928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.963939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.963956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.963967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.963990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 
11:50:29.964004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.964016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.964027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.964046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.964058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.964077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.964090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.964103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.964116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.964128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.964140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.964152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.964164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.964175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.964188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.964199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.964211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.964223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.964235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.964246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.964258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.964281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.964312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.964334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.964346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 
00:25:31.173 [2024-07-25 11:50:29.964358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.964370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.964382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.964394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.964406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.964417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.964430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.964441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.964453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.964465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.964476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.964488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.964500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.964513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.964535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.964548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.964560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.964572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.964584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.964596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.964607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.964619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.964646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.964673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.964685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 
wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.964697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.964709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.964720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.964731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.964743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.964755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.964766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.964778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.964789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.964801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.964812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.964824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.964836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.964848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.964860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.964872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.964883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.964895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.964906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.964918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:31.173 [2024-07-25 11:50:29.964931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:31.174 [2024-07-25 11:50:29.964959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:31.174 [2024-07-25 11:50:29.964973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:31.174 [2024-07-25 11:50:29.964985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:31.174 [2024-07-25 11:50:29.965012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:31.174 [2024-07-25 11:50:29.965024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:31.174 [2024-07-25 11:50:29.965045] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:31.174 [2024-07-25 11:50:29.965056] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c1430877-b154-4b5e-893a-2e52e6ce0696 00:25:31.174 [2024-07-25 11:50:29.965069] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133376 00:25:31.174 [2024-07-25 11:50:29.965095] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 15808 00:25:31.174 [2024-07-25 11:50:29.965144] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 14848 00:25:31.174 [2024-07-25 11:50:29.965157] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0647 00:25:31.174 [2024-07-25 11:50:29.965168] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:31.174 [2024-07-25 11:50:29.965183] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:31.174 [2024-07-25 11:50:29.965194] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:31.174 [2024-07-25 11:50:29.965204] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:31.174 [2024-07-25 11:50:29.965214] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:31.174 [2024-07-25 11:50:29.965225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.174 [2024-07-25 11:50:29.965236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:31.174 [2024-07-25 11:50:29.965248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.604 ms 00:25:31.174 [2024-07-25 11:50:29.965258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.174 [2024-07-25 11:50:29.982380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.174 [2024-07-25 11:50:29.982436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:31.174 [2024-07-25 11:50:29.982451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.055 ms 00:25:31.174 [2024-07-25 11:50:29.982480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.174 [2024-07-25 11:50:29.983066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.174 [2024-07-25 11:50:29.983090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:31.174 [2024-07-25 11:50:29.983105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.562 ms 00:25:31.174 [2024-07-25 11:50:29.983116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.174 [2024-07-25 11:50:30.021696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.174 [2024-07-25 11:50:30.021791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:31.174 [2024-07-25 11:50:30.021808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.174 [2024-07-25 11:50:30.021819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.174 [2024-07-25 11:50:30.021891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.174 [2024-07-25 11:50:30.021904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:31.174 [2024-07-25 11:50:30.021915] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.174 [2024-07-25 11:50:30.021946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.174 [2024-07-25 11:50:30.022084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.174 [2024-07-25 11:50:30.022102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:31.174 [2024-07-25 11:50:30.022153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.174 [2024-07-25 11:50:30.022164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.174 [2024-07-25 11:50:30.022188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.174 [2024-07-25 11:50:30.022201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:31.174 [2024-07-25 11:50:30.022213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.174 [2024-07-25 11:50:30.022224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.174 [2024-07-25 11:50:30.133563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.174 [2024-07-25 11:50:30.133680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:31.174 [2024-07-25 11:50:30.133707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.174 [2024-07-25 11:50:30.133720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.433 [2024-07-25 11:50:30.222822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.433 [2024-07-25 11:50:30.222894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:31.433 [2024-07-25 11:50:30.222914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.433 [2024-07-25 11:50:30.222942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.433 [2024-07-25 11:50:30.223073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.433 [2024-07-25 11:50:30.223092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:31.433 [2024-07-25 11:50:30.223104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.433 [2024-07-25 11:50:30.223116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.433 [2024-07-25 11:50:30.223184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.433 [2024-07-25 11:50:30.223200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:31.433 [2024-07-25 11:50:30.223212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.433 [2024-07-25 11:50:30.223223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.433 [2024-07-25 11:50:30.223359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.433 [2024-07-25 11:50:30.223379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:31.433 [2024-07-25 11:50:30.223393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.433 [2024-07-25 11:50:30.223404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.433 [2024-07-25 11:50:30.223466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.433 [2024-07-25 11:50:30.223485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize superblock
00:25:31.433 [2024-07-25 11:50:30.223498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:31.433 [2024-07-25 11:50:30.223509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:31.433 [2024-07-25 11:50:30.223559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:31.433 [2024-07-25 11:50:30.223574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:25:31.433 [2024-07-25 11:50:30.223586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:31.433 [2024-07-25 11:50:30.223597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:31.433 [2024-07-25 11:50:30.223660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:31.433 [2024-07-25 11:50:30.223677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:25:31.433 [2024-07-25 11:50:30.223689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:31.433 [2024-07-25 11:50:30.223700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:31.433 [2024-07-25 11:50:30.223862] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 576.992 ms, result 0
00:25:32.369 
00:25:32.369 
00:25:32.627 11:50:31 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:25:35.156 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:25:35.156 11:50:33 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT
00:25:35.156 11:50:33 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill
00:25:35.156 11:50:33 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:25:35.156 11:50:33 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:25:35.156 11:50:33 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:25:35.156 11:50:33 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 80598
00:25:35.156 11:50:33 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 80598 ']'
00:25:35.156 11:50:33 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 80598
00:25:35.156 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (80598) - No such process
00:25:35.156 Process with pid 80598 is not found
11:50:33 ftl.ftl_restore -- common/autotest_common.sh@977 -- # echo 'Process with pid 80598 is not found'
00:25:35.157 Remove shared memory files
11:50:33 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm
00:25:35.157 11:50:33 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files
00:25:35.157 11:50:33 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f
00:25:35.157 11:50:33 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f
00:25:35.157 11:50:33 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f
00:25:35.157 11:50:33 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:25:35.157 11:50:33 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f
00:25:35.157 ************************************
00:25:35.157 END TEST ftl_restore
00:25:35.157 ************************************
00:25:35.157 
00:25:35.157 real 3m14.949s
00:25:35.157 user 3m0.905s
00:25:35.157 sys 0m16.481s
00:25:35.157 11:50:33 ftl.ftl_restore -- common/autotest_common.sh@1126 -- # 
xtrace_disable
00:25:35.157 11:50:33 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0
00:25:35.157 11:50:33 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:25:35.157 11:50:33 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable
00:25:35.157 11:50:33 ftl -- common/autotest_common.sh@10 -- # set +x
00:25:35.157 ************************************
00:25:35.157 START TEST ftl_dirty_shutdown
00:25:35.157 ************************************
00:25:35.157 11:50:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0
00:25:35.157 * Looking for test storage...
00:25:35.157 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:25:35.157 11:50:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:25:35.157 11:50:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh
00:25:35.157 11:50:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:25:35.157 11:50:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:25:35.157 11:50:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
00:25:35.157 11:50:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:25:35.157 11:50:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:25:35.157 11:50:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:25:35.157 11:50:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
00:25:35.157 11:50:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:25:35.157 11:50:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:25:35.157 11:50:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:25:35.157 11:50:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
00:25:35.157 11:50:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:25:35.157 11:50:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:25:35.157 11:50:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:25:35.157 11:50:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid=
00:25:35.157 11:50:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:25:35.157 11:50:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:25:35.157 11:50:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:25:35.157 11:50:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
00:25:35.157 11:50:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:25:35.157 11:50:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # 
spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:35.157 11:50:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:35.157 11:50:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:35.157 11:50:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:35.157 11:50:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:35.157 11:50:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:35.157 11:50:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:35.157 11:50:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:35.157 11:50:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:35.157 11:50:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:25:35.157 11:50:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:25:35.157 11:50:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:25:35.157 11:50:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:25:35.157 11:50:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:25:35.157 11:50:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:25:35.157 11:50:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:25:35.157 11:50:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:25:35.157 11:50:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:25:35.157 11:50:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:25:35.157 11:50:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:25:35.157 11:50:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=82655 00:25:35.157 11:50:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:25:35.157 11:50:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 82655 00:25:35.157 11:50:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@831 -- # '[' -z 82655 ']' 00:25:35.157 11:50:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:35.157 11:50:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:35.157 11:50:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:35.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:35.157 11:50:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:35.157 11:50:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:35.157 [2024-07-25 11:50:34.186831] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
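For reference, the spdk_tgt launch the trace above is stepping through can be reproduced outside the harness with a sketch like this (shell; $SPDK_DIR stands in for /home/vagrant/spdk_repo/spdk, and the polling loop is an assumed stand-in for the harness's waitforlisten helper):

  $SPDK_DIR/build/bin/spdk_tgt -m 0x1 &      # core mask 0x1: one reactor, pinned to core 0
  svcpid=$!
  # wait until the target answers RPCs on its default UNIX socket
  until $SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done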
00:25:35.157 [2024-07-25 11:50:34.187212] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82655 ] 00:25:35.415 [2024-07-25 11:50:34.364549] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:35.672 [2024-07-25 11:50:34.580353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:36.605 11:50:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:36.605 11:50:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # return 0 00:25:36.605 11:50:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:25:36.605 11:50:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:25:36.605 11:50:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:36.605 11:50:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:25:36.605 11:50:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:25:36.605 11:50:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:25:36.863 11:50:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:25:36.863 11:50:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:25:36.863 11:50:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:25:36.863 11:50:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:25:36.863 11:50:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:25:36.863 11:50:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:25:36.863 11:50:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:25:36.863 11:50:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:25:37.122 11:50:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:25:37.122 { 00:25:37.122 "name": "nvme0n1", 00:25:37.122 "aliases": [ 00:25:37.122 "7e7783f3-21d4-4b04-8f3c-e59b802fd978" 00:25:37.122 ], 00:25:37.122 "product_name": "NVMe disk", 00:25:37.122 "block_size": 4096, 00:25:37.122 "num_blocks": 1310720, 00:25:37.122 "uuid": "7e7783f3-21d4-4b04-8f3c-e59b802fd978", 00:25:37.122 "assigned_rate_limits": { 00:25:37.122 "rw_ios_per_sec": 0, 00:25:37.122 "rw_mbytes_per_sec": 0, 00:25:37.122 "r_mbytes_per_sec": 0, 00:25:37.122 "w_mbytes_per_sec": 0 00:25:37.122 }, 00:25:37.122 "claimed": true, 00:25:37.122 "claim_type": "read_many_write_one", 00:25:37.122 "zoned": false, 00:25:37.122 "supported_io_types": { 00:25:37.122 "read": true, 00:25:37.122 "write": true, 00:25:37.122 "unmap": true, 00:25:37.122 "flush": true, 00:25:37.122 "reset": true, 00:25:37.122 "nvme_admin": true, 00:25:37.122 "nvme_io": true, 00:25:37.122 "nvme_io_md": false, 00:25:37.122 "write_zeroes": true, 00:25:37.122 "zcopy": false, 00:25:37.122 "get_zone_info": false, 00:25:37.122 "zone_management": false, 00:25:37.122 "zone_append": false, 00:25:37.122 "compare": true, 00:25:37.122 "compare_and_write": false, 00:25:37.122 "abort": true, 00:25:37.122 "seek_hole": false, 00:25:37.122 "seek_data": false, 00:25:37.122 "copy": true, 00:25:37.122 
"nvme_iov_md": false 00:25:37.122 }, 00:25:37.122 "driver_specific": { 00:25:37.122 "nvme": [ 00:25:37.122 { 00:25:37.122 "pci_address": "0000:00:11.0", 00:25:37.122 "trid": { 00:25:37.122 "trtype": "PCIe", 00:25:37.122 "traddr": "0000:00:11.0" 00:25:37.122 }, 00:25:37.122 "ctrlr_data": { 00:25:37.122 "cntlid": 0, 00:25:37.122 "vendor_id": "0x1b36", 00:25:37.122 "model_number": "QEMU NVMe Ctrl", 00:25:37.122 "serial_number": "12341", 00:25:37.122 "firmware_revision": "8.0.0", 00:25:37.122 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:37.122 "oacs": { 00:25:37.122 "security": 0, 00:25:37.122 "format": 1, 00:25:37.122 "firmware": 0, 00:25:37.122 "ns_manage": 1 00:25:37.122 }, 00:25:37.122 "multi_ctrlr": false, 00:25:37.122 "ana_reporting": false 00:25:37.122 }, 00:25:37.122 "vs": { 00:25:37.122 "nvme_version": "1.4" 00:25:37.122 }, 00:25:37.122 "ns_data": { 00:25:37.122 "id": 1, 00:25:37.122 "can_share": false 00:25:37.122 } 00:25:37.122 } 00:25:37.122 ], 00:25:37.122 "mp_policy": "active_passive" 00:25:37.122 } 00:25:37.122 } 00:25:37.122 ]' 00:25:37.122 11:50:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:25:37.122 11:50:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:25:37.122 11:50:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:25:37.122 11:50:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:25:37.122 11:50:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:25:37.122 11:50:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:25:37.122 11:50:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:25:37.122 11:50:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:25:37.122 11:50:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:25:37.122 11:50:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:37.122 11:50:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:37.380 11:50:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=5ed8456d-e21d-4baa-8b7c-2aee8b1f8c77 00:25:37.380 11:50:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:25:37.380 11:50:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5ed8456d-e21d-4baa-8b7c-2aee8b1f8c77 00:25:37.638 11:50:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:25:37.896 11:50:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=67c7a327-3336-4479-93a7-f8ddca5f40bf 00:25:37.896 11:50:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 67c7a327-3336-4479-93a7-f8ddca5f40bf 00:25:38.155 11:50:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=2648255e-c6d9-482c-85de-91007fd023a2 00:25:38.155 11:50:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:25:38.155 11:50:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 2648255e-c6d9-482c-85de-91007fd023a2 00:25:38.155 11:50:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:25:38.155 11:50:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:25:38.155 
11:50:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=2648255e-c6d9-482c-85de-91007fd023a2 00:25:38.155 11:50:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:25:38.155 11:50:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 2648255e-c6d9-482c-85de-91007fd023a2 00:25:38.155 11:50:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=2648255e-c6d9-482c-85de-91007fd023a2 00:25:38.155 11:50:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:25:38.155 11:50:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:25:38.155 11:50:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:25:38.155 11:50:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2648255e-c6d9-482c-85de-91007fd023a2 00:25:38.413 11:50:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:25:38.413 { 00:25:38.413 "name": "2648255e-c6d9-482c-85de-91007fd023a2", 00:25:38.413 "aliases": [ 00:25:38.413 "lvs/nvme0n1p0" 00:25:38.413 ], 00:25:38.413 "product_name": "Logical Volume", 00:25:38.413 "block_size": 4096, 00:25:38.413 "num_blocks": 26476544, 00:25:38.413 "uuid": "2648255e-c6d9-482c-85de-91007fd023a2", 00:25:38.413 "assigned_rate_limits": { 00:25:38.413 "rw_ios_per_sec": 0, 00:25:38.413 "rw_mbytes_per_sec": 0, 00:25:38.413 "r_mbytes_per_sec": 0, 00:25:38.413 "w_mbytes_per_sec": 0 00:25:38.413 }, 00:25:38.413 "claimed": false, 00:25:38.413 "zoned": false, 00:25:38.413 "supported_io_types": { 00:25:38.413 "read": true, 00:25:38.413 "write": true, 00:25:38.413 "unmap": true, 00:25:38.413 "flush": false, 00:25:38.413 "reset": true, 00:25:38.413 "nvme_admin": false, 00:25:38.413 "nvme_io": false, 00:25:38.413 "nvme_io_md": false, 00:25:38.413 "write_zeroes": true, 00:25:38.413 "zcopy": false, 00:25:38.413 "get_zone_info": false, 00:25:38.413 "zone_management": false, 00:25:38.413 "zone_append": false, 00:25:38.413 "compare": false, 00:25:38.413 "compare_and_write": false, 00:25:38.413 "abort": false, 00:25:38.413 "seek_hole": true, 00:25:38.413 "seek_data": true, 00:25:38.413 "copy": false, 00:25:38.413 "nvme_iov_md": false 00:25:38.413 }, 00:25:38.413 "driver_specific": { 00:25:38.413 "lvol": { 00:25:38.413 "lvol_store_uuid": "67c7a327-3336-4479-93a7-f8ddca5f40bf", 00:25:38.413 "base_bdev": "nvme0n1", 00:25:38.413 "thin_provision": true, 00:25:38.413 "num_allocated_clusters": 0, 00:25:38.413 "snapshot": false, 00:25:38.413 "clone": false, 00:25:38.413 "esnap_clone": false 00:25:38.413 } 00:25:38.413 } 00:25:38.413 } 00:25:38.413 ]' 00:25:38.413 11:50:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:25:38.413 11:50:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:25:38.413 11:50:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:25:38.672 11:50:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:25:38.672 11:50:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:25:38.672 11:50:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:25:38.672 11:50:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:25:38.672 11:50:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:25:38.672 11:50:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:25:38.930 11:50:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:25:38.930 11:50:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:25:38.930 11:50:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 2648255e-c6d9-482c-85de-91007fd023a2 00:25:38.930 11:50:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=2648255e-c6d9-482c-85de-91007fd023a2 00:25:38.930 11:50:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:25:38.930 11:50:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:25:38.930 11:50:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:25:38.930 11:50:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2648255e-c6d9-482c-85de-91007fd023a2 00:25:39.189 11:50:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:25:39.189 { 00:25:39.189 "name": "2648255e-c6d9-482c-85de-91007fd023a2", 00:25:39.189 "aliases": [ 00:25:39.189 "lvs/nvme0n1p0" 00:25:39.189 ], 00:25:39.189 "product_name": "Logical Volume", 00:25:39.189 "block_size": 4096, 00:25:39.189 "num_blocks": 26476544, 00:25:39.189 "uuid": "2648255e-c6d9-482c-85de-91007fd023a2", 00:25:39.189 "assigned_rate_limits": { 00:25:39.189 "rw_ios_per_sec": 0, 00:25:39.189 "rw_mbytes_per_sec": 0, 00:25:39.189 "r_mbytes_per_sec": 0, 00:25:39.189 "w_mbytes_per_sec": 0 00:25:39.189 }, 00:25:39.189 "claimed": false, 00:25:39.189 "zoned": false, 00:25:39.189 "supported_io_types": { 00:25:39.189 "read": true, 00:25:39.189 "write": true, 00:25:39.189 "unmap": true, 00:25:39.189 "flush": false, 00:25:39.189 "reset": true, 00:25:39.189 "nvme_admin": false, 00:25:39.189 "nvme_io": false, 00:25:39.189 "nvme_io_md": false, 00:25:39.189 "write_zeroes": true, 00:25:39.189 "zcopy": false, 00:25:39.189 "get_zone_info": false, 00:25:39.189 "zone_management": false, 00:25:39.189 "zone_append": false, 00:25:39.189 "compare": false, 00:25:39.189 "compare_and_write": false, 00:25:39.189 "abort": false, 00:25:39.189 "seek_hole": true, 00:25:39.189 "seek_data": true, 00:25:39.189 "copy": false, 00:25:39.189 "nvme_iov_md": false 00:25:39.189 }, 00:25:39.189 "driver_specific": { 00:25:39.189 "lvol": { 00:25:39.189 "lvol_store_uuid": "67c7a327-3336-4479-93a7-f8ddca5f40bf", 00:25:39.189 "base_bdev": "nvme0n1", 00:25:39.189 "thin_provision": true, 00:25:39.189 "num_allocated_clusters": 0, 00:25:39.189 "snapshot": false, 00:25:39.189 "clone": false, 00:25:39.189 "esnap_clone": false 00:25:39.189 } 00:25:39.189 } 00:25:39.189 } 00:25:39.189 ]' 00:25:39.189 11:50:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:25:39.189 11:50:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:25:39.189 11:50:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:25:39.189 11:50:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:25:39.189 11:50:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:25:39.189 11:50:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:25:39.189 11:50:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:25:39.190 11:50:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:25:39.448 11:50:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:25:39.448 11:50:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 2648255e-c6d9-482c-85de-91007fd023a2 00:25:39.448 11:50:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=2648255e-c6d9-482c-85de-91007fd023a2 00:25:39.448 11:50:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:25:39.448 11:50:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:25:39.448 11:50:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:25:39.448 11:50:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2648255e-c6d9-482c-85de-91007fd023a2 00:25:39.707 11:50:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:25:39.707 { 00:25:39.707 "name": "2648255e-c6d9-482c-85de-91007fd023a2", 00:25:39.707 "aliases": [ 00:25:39.707 "lvs/nvme0n1p0" 00:25:39.707 ], 00:25:39.707 "product_name": "Logical Volume", 00:25:39.707 "block_size": 4096, 00:25:39.707 "num_blocks": 26476544, 00:25:39.707 "uuid": "2648255e-c6d9-482c-85de-91007fd023a2", 00:25:39.707 "assigned_rate_limits": { 00:25:39.707 "rw_ios_per_sec": 0, 00:25:39.707 "rw_mbytes_per_sec": 0, 00:25:39.707 "r_mbytes_per_sec": 0, 00:25:39.707 "w_mbytes_per_sec": 0 00:25:39.707 }, 00:25:39.707 "claimed": false, 00:25:39.707 "zoned": false, 00:25:39.707 "supported_io_types": { 00:25:39.707 "read": true, 00:25:39.707 "write": true, 00:25:39.707 "unmap": true, 00:25:39.707 "flush": false, 00:25:39.707 "reset": true, 00:25:39.707 "nvme_admin": false, 00:25:39.707 "nvme_io": false, 00:25:39.707 "nvme_io_md": false, 00:25:39.707 "write_zeroes": true, 00:25:39.707 "zcopy": false, 00:25:39.707 "get_zone_info": false, 00:25:39.707 "zone_management": false, 00:25:39.707 "zone_append": false, 00:25:39.707 "compare": false, 00:25:39.707 "compare_and_write": false, 00:25:39.707 "abort": false, 00:25:39.707 "seek_hole": true, 00:25:39.707 "seek_data": true, 00:25:39.707 "copy": false, 00:25:39.707 "nvme_iov_md": false 00:25:39.707 }, 00:25:39.707 "driver_specific": { 00:25:39.707 "lvol": { 00:25:39.707 "lvol_store_uuid": "67c7a327-3336-4479-93a7-f8ddca5f40bf", 00:25:39.707 "base_bdev": "nvme0n1", 00:25:39.707 "thin_provision": true, 00:25:39.707 "num_allocated_clusters": 0, 00:25:39.707 "snapshot": false, 00:25:39.707 "clone": false, 00:25:39.707 "esnap_clone": false 00:25:39.707 } 00:25:39.707 } 00:25:39.707 } 00:25:39.707 ]' 00:25:39.707 11:50:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:25:39.707 11:50:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:25:39.707 11:50:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:25:39.707 11:50:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:25:39.707 11:50:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:25:39.707 11:50:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:25:39.707 11:50:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:25:39.707 11:50:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 2648255e-c6d9-482c-85de-91007fd023a2 
--l2p_dram_limit 10' 00:25:39.707 11:50:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:25:39.707 11:50:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:25:39.707 11:50:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:25:39.707 11:50:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 2648255e-c6d9-482c-85de-91007fd023a2 --l2p_dram_limit 10 -c nvc0n1p0 00:25:39.967 [2024-07-25 11:50:38.909221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.967 [2024-07-25 11:50:38.909305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:39.967 [2024-07-25 11:50:38.909353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:39.967 [2024-07-25 11:50:38.909368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.967 [2024-07-25 11:50:38.909462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.967 [2024-07-25 11:50:38.909483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:39.967 [2024-07-25 11:50:38.909497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:25:39.967 [2024-07-25 11:50:38.909510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.967 [2024-07-25 11:50:38.909539] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:39.967 [2024-07-25 11:50:38.910640] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:39.967 [2024-07-25 11:50:38.910674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.967 [2024-07-25 11:50:38.910694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:39.967 [2024-07-25 11:50:38.910708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.142 ms 00:25:39.967 [2024-07-25 11:50:38.910722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.967 [2024-07-25 11:50:38.910879] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID d133523e-44ee-4c5c-bfeb-b26f2a26ec1d 00:25:39.967 [2024-07-25 11:50:38.913107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.967 [2024-07-25 11:50:38.913145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:25:39.967 [2024-07-25 11:50:38.913164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:25:39.967 [2024-07-25 11:50:38.913175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.967 [2024-07-25 11:50:38.923079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.967 [2024-07-25 11:50:38.923124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:39.967 [2024-07-25 11:50:38.923161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.821 ms 00:25:39.967 [2024-07-25 11:50:38.923173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.967 [2024-07-25 11:50:38.923307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.967 [2024-07-25 11:50:38.923342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:39.967 [2024-07-25 11:50:38.923372] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:25:39.967 [2024-07-25 11:50:38.923384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.967 [2024-07-25 11:50:38.923475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.967 [2024-07-25 11:50:38.923492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:39.967 [2024-07-25 11:50:38.923511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:25:39.967 [2024-07-25 11:50:38.923522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.967 [2024-07-25 11:50:38.923560] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:39.967 [2024-07-25 11:50:38.929061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.967 [2024-07-25 11:50:38.929120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:39.967 [2024-07-25 11:50:38.929135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.516 ms 00:25:39.967 [2024-07-25 11:50:38.929148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.967 [2024-07-25 11:50:38.929205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.967 [2024-07-25 11:50:38.929223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:39.967 [2024-07-25 11:50:38.929235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:25:39.967 [2024-07-25 11:50:38.929247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.967 [2024-07-25 11:50:38.929294] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:25:39.967 [2024-07-25 11:50:38.929461] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:39.967 [2024-07-25 11:50:38.929479] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:39.967 [2024-07-25 11:50:38.929499] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:25:39.967 [2024-07-25 11:50:38.929514] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:39.967 [2024-07-25 11:50:38.929529] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:39.967 [2024-07-25 11:50:38.929542] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:39.967 [2024-07-25 11:50:38.929558] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:39.967 [2024-07-25 11:50:38.929569] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:39.967 [2024-07-25 11:50:38.929581] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:39.967 [2024-07-25 11:50:38.929592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.967 [2024-07-25 11:50:38.929604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:39.967 [2024-07-25 11:50:38.929615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:25:39.967 [2024-07-25 11:50:38.929628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.967 [2024-07-25 11:50:38.929711] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.967 [2024-07-25 11:50:38.929726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:39.967 [2024-07-25 11:50:38.929738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:25:39.967 [2024-07-25 11:50:38.929753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.967 [2024-07-25 11:50:38.929860] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:39.967 [2024-07-25 11:50:38.929881] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:39.967 [2024-07-25 11:50:38.929904] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:39.967 [2024-07-25 11:50:38.929918] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:39.967 [2024-07-25 11:50:38.929929] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:39.967 [2024-07-25 11:50:38.929980] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:39.967 [2024-07-25 11:50:38.929993] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:39.967 [2024-07-25 11:50:38.930005] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:39.967 [2024-07-25 11:50:38.930015] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:39.967 [2024-07-25 11:50:38.930027] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:39.967 [2024-07-25 11:50:38.930037] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:39.967 [2024-07-25 11:50:38.930051] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:39.967 [2024-07-25 11:50:38.930078] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:39.967 [2024-07-25 11:50:38.930090] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:39.967 [2024-07-25 11:50:38.930101] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:39.967 [2024-07-25 11:50:38.930114] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:39.967 [2024-07-25 11:50:38.930124] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:39.967 [2024-07-25 11:50:38.930141] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:39.967 [2024-07-25 11:50:38.930153] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:39.967 [2024-07-25 11:50:38.930166] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:39.967 [2024-07-25 11:50:38.930177] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:39.967 [2024-07-25 11:50:38.930190] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:39.967 [2024-07-25 11:50:38.930200] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:39.967 [2024-07-25 11:50:38.930212] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:39.967 [2024-07-25 11:50:38.930222] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:39.967 [2024-07-25 11:50:38.930234] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:39.967 [2024-07-25 11:50:38.930243] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:39.967 [2024-07-25 11:50:38.930255] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:39.968 [2024-07-25 11:50:38.930298] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:39.968 [2024-07-25 11:50:38.930311] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:39.968 [2024-07-25 11:50:38.930321] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:39.968 [2024-07-25 11:50:38.930339] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:39.968 [2024-07-25 11:50:38.930350] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:39.968 [2024-07-25 11:50:38.930365] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:39.968 [2024-07-25 11:50:38.930391] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:39.968 [2024-07-25 11:50:38.930404] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:39.968 [2024-07-25 11:50:38.930414] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:39.968 [2024-07-25 11:50:38.930430] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:39.968 [2024-07-25 11:50:38.930441] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:39.968 [2024-07-25 11:50:38.930454] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:39.968 [2024-07-25 11:50:38.930464] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:39.968 [2024-07-25 11:50:38.930477] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:39.968 [2024-07-25 11:50:38.930487] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:39.968 [2024-07-25 11:50:38.930500] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:39.968 [2024-07-25 11:50:38.930511] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:39.968 [2024-07-25 11:50:38.930525] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:39.968 [2024-07-25 11:50:38.930536] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:39.968 [2024-07-25 11:50:38.930550] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:39.968 [2024-07-25 11:50:38.930560] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:39.968 [2024-07-25 11:50:38.930576] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:39.968 [2024-07-25 11:50:38.930588] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:39.968 [2024-07-25 11:50:38.930601] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:39.968 [2024-07-25 11:50:38.930612] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:39.968 [2024-07-25 11:50:38.930631] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:39.968 [2024-07-25 11:50:38.930656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:39.968 [2024-07-25 11:50:38.930671] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:39.968 [2024-07-25 11:50:38.930687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:39.968 [2024-07-25 11:50:38.930701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:39.968 [2024-07-25 11:50:38.930712] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:39.968 [2024-07-25 11:50:38.930726] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:39.968 [2024-07-25 11:50:38.930751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:39.968 [2024-07-25 11:50:38.930782] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:39.968 [2024-07-25 11:50:38.930793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:39.968 [2024-07-25 11:50:38.930807] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:39.968 [2024-07-25 11:50:38.930817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:39.968 [2024-07-25 11:50:38.930833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:39.968 [2024-07-25 11:50:38.930846] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:39.968 [2024-07-25 11:50:38.930860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:39.968 [2024-07-25 11:50:38.930871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:39.968 [2024-07-25 11:50:38.930885] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:39.968 [2024-07-25 11:50:38.930897] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:39.968 [2024-07-25 11:50:38.930911] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:39.968 [2024-07-25 11:50:38.930930] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:39.968 [2024-07-25 11:50:38.930944] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:39.968 [2024-07-25 11:50:38.930956] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:39.968 [2024-07-25 11:50:38.930971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.968 [2024-07-25 11:50:38.930982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:39.968 [2024-07-25 11:50:38.931013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.164 ms 00:25:39.968 [2024-07-25 11:50:38.931024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.968 [2024-07-25 11:50:38.931101] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:25:39.968 [2024-07-25 11:50:38.931119] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:25:42.496 [2024-07-25 11:50:41.443494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.496 [2024-07-25 11:50:41.443595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:25:42.496 [2024-07-25 11:50:41.443622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2512.388 ms 00:25:42.496 [2024-07-25 11:50:41.443635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.496 [2024-07-25 11:50:41.481454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.496 [2024-07-25 11:50:41.481514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:42.496 [2024-07-25 11:50:41.481555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.479 ms 00:25:42.496 [2024-07-25 11:50:41.481567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.496 [2024-07-25 11:50:41.481755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.496 [2024-07-25 11:50:41.481774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:42.496 [2024-07-25 11:50:41.481795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:25:42.496 [2024-07-25 11:50:41.481823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.496 [2024-07-25 11:50:41.520858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.496 [2024-07-25 11:50:41.521151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:42.496 [2024-07-25 11:50:41.521291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.962 ms 00:25:42.496 [2024-07-25 11:50:41.521326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.496 [2024-07-25 11:50:41.521387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.496 [2024-07-25 11:50:41.521403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:42.496 [2024-07-25 11:50:41.521426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:42.496 [2024-07-25 11:50:41.521438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.496 [2024-07-25 11:50:41.522105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.496 [2024-07-25 11:50:41.522130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:42.496 [2024-07-25 11:50:41.522163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.546 ms 00:25:42.496 [2024-07-25 11:50:41.522174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.496 [2024-07-25 11:50:41.522357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.496 [2024-07-25 11:50:41.522384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:42.496 [2024-07-25 11:50:41.522399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:25:42.496 [2024-07-25 11:50:41.522411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.496 [2024-07-25 11:50:41.541446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.497 [2024-07-25 11:50:41.541482] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:42.497 [2024-07-25 11:50:41.541518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.002 ms 00:25:42.497 [2024-07-25 11:50:41.541529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.755 [2024-07-25 11:50:41.554599] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:42.755 [2024-07-25 11:50:41.558640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.755 [2024-07-25 11:50:41.558677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:42.755 [2024-07-25 11:50:41.558709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.011 ms 00:25:42.755 [2024-07-25 11:50:41.558722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.755 [2024-07-25 11:50:41.637541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.755 [2024-07-25 11:50:41.637642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:25:42.755 [2024-07-25 11:50:41.637664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 78.782 ms 00:25:42.755 [2024-07-25 11:50:41.637681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.755 [2024-07-25 11:50:41.637993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.755 [2024-07-25 11:50:41.638018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:42.755 [2024-07-25 11:50:41.638033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:25:42.755 [2024-07-25 11:50:41.638080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.755 [2024-07-25 11:50:41.665332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.755 [2024-07-25 11:50:41.665394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:25:42.755 [2024-07-25 11:50:41.665411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.188 ms 00:25:42.755 [2024-07-25 11:50:41.665429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.755 [2024-07-25 11:50:41.692081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.755 [2024-07-25 11:50:41.692139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:25:42.755 [2024-07-25 11:50:41.692157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.608 ms 00:25:42.755 [2024-07-25 11:50:41.692171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.755 [2024-07-25 11:50:41.693046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.756 [2024-07-25 11:50:41.693083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:42.756 [2024-07-25 11:50:41.693103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.832 ms 00:25:42.756 [2024-07-25 11:50:41.693118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.756 [2024-07-25 11:50:41.778826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.756 [2024-07-25 11:50:41.778899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:25:42.756 [2024-07-25 11:50:41.778920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.642 ms 00:25:42.756 [2024-07-25 11:50:41.778985] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.014 [2024-07-25 11:50:41.811823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.014 [2024-07-25 11:50:41.811886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:25:43.014 [2024-07-25 11:50:41.811907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.781 ms 00:25:43.014 [2024-07-25 11:50:41.811921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.014 [2024-07-25 11:50:41.843692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.014 [2024-07-25 11:50:41.843749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:25:43.014 [2024-07-25 11:50:41.843769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.694 ms 00:25:43.014 [2024-07-25 11:50:41.843784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.014 [2024-07-25 11:50:41.874064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.014 [2024-07-25 11:50:41.874128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:43.014 [2024-07-25 11:50:41.874147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.229 ms 00:25:43.014 [2024-07-25 11:50:41.874161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.014 [2024-07-25 11:50:41.874214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.014 [2024-07-25 11:50:41.874236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:43.014 [2024-07-25 11:50:41.874250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:43.014 [2024-07-25 11:50:41.874277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.014 [2024-07-25 11:50:41.874405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.014 [2024-07-25 11:50:41.874434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:43.014 [2024-07-25 11:50:41.874449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:25:43.014 [2024-07-25 11:50:41.874463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.014 [2024-07-25 11:50:41.876018] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2966.106 ms, result 0 00:25:43.014 { 00:25:43.014 "name": "ftl0", 00:25:43.014 "uuid": "d133523e-44ee-4c5c-bfeb-b26f2a26ec1d" 00:25:43.014 } 00:25:43.014 11:50:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:25:43.014 11:50:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:25:43.273 11:50:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:25:43.273 11:50:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:25:43.273 11:50:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:25:43.531 /dev/nbd0 00:25:43.531 11:50:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:25:43.531 11:50:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:25:43.531 11:50:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # local i 00:25:43.531 11:50:42 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:25:43.531 11:50:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:25:43.531 11:50:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:25:43.531 11:50:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # break 00:25:43.531 11:50:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:25:43.531 11:50:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:25:43.531 11:50:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:25:43.531 1+0 records in 00:25:43.531 1+0 records out 00:25:43.531 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000400806 s, 10.2 MB/s 00:25:43.531 11:50:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:25:43.531 11:50:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # size=4096 00:25:43.531 11:50:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:25:43.531 11:50:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:25:43.532 11:50:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # return 0 00:25:43.532 11:50:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:25:43.790 [2024-07-25 11:50:42.641146] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:25:43.790 [2024-07-25 11:50:42.641297] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82794 ] 00:25:43.790 [2024-07-25 11:50:42.813112] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:44.357 [2024-07-25 11:50:43.100182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:51.793  Copying: 173/1024 [MB] (173 MBps) Copying: 344/1024 [MB] (171 MBps) Copying: 514/1024 [MB] (170 MBps) Copying: 682/1024 [MB] (168 MBps) Copying: 857/1024 [MB] (175 MBps) Copying: 1024/1024 [MB] (average 171 MBps) 00:25:51.793 00:25:51.793 11:50:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:53.696 11:50:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:25:53.954 [2024-07-25 11:50:52.764593] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
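The I/O phase just traced: ftl0 is exposed as a kernel block device over NBD, spdk_dd generates a 1 GiB file (262144 x 4096-byte blocks) of random data, md5sum records its reference checksum, and the file is then written through /dev/nbd0 with O_DIRECT. Condensed from the commands in this run:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  t=/home/vagrant/spdk_repo/spdk/test/ftl/testfile
  $rpc nbd_start_disk ftl0 /dev/nbd0                             # FTL bdev -> /dev/nbd0
  $dd -m 0x2 --if=/dev/urandom --of=$t --bs=4096 --count=262144  # 1 GiB of random data
  md5sum $t                                                      # reference checksum
  $dd -m 0x2 --if=$t --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct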
00:25:53.954 [2024-07-25 11:50:52.764796] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82899 ] 00:25:53.954 [2024-07-25 11:50:52.935257] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:54.213 [2024-07-25 11:50:53.191355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:05.067  Copying: 15/1024 [MB] (15 MBps) Copying: 29/1024 [MB] (14 MBps) Copying: 44/1024 [MB] (14 MBps) Copying: 59/1024 [MB] (14 MBps) Copying: 72/1024 [MB] (12 MBps) Copying: 85/1024 [MB] (13 MBps) Copying: 98/1024 [MB] (13 MBps) Copying: 111/1024 [MB] (13 MBps) Copying: 124/1024 [MB] (13 MBps) Copying: 138/1024 [MB] (13 MBps) Copying: 153/1024 [MB] (15 MBps) Copying: 168/1024 [MB] (14 MBps) Copying: 183/1024 [MB] (15 MBps) Copying: 199/1024 [MB] (15 MBps) Copying: 214/1024 [MB] (15 MBps) Copying: 230/1024 [MB] (15 MBps) Copying: 245/1024 [MB] (15 MBps) Copying: 261/1024 [MB] (15 MBps) Copying: 277/1024 [MB] (15 MBps) Copying: 292/1024 [MB] (15 MBps) Copying: 308/1024 [MB] (16 MBps) Copying: 323/1024 [MB] (15 MBps) Copying: 339/1024 [MB] (15 MBps) Copying: 354/1024 [MB] (15 MBps) Copying: 369/1024 [MB] (14 MBps) Copying: 384/1024 [MB] (15 MBps) Copying: 400/1024 [MB] (15 MBps) Copying: 415/1024 [MB] (15 MBps) Copying: 431/1024 [MB] (15 MBps) Copying: 446/1024 [MB] (15 MBps) Copying: 461/1024 [MB] (15 MBps) Copying: 476/1024 [MB] (14 MBps) Copying: 490/1024 [MB] (14 MBps) Copying: 505/1024 [MB] (14 MBps) Copying: 519/1024 [MB] (14 MBps) Copying: 534/1024 [MB] (15 MBps) Copying: 549/1024 [MB] (15 MBps) Copying: 564/1024 [MB] (15 MBps) Copying: 579/1024 [MB] (14 MBps) Copying: 594/1024 [MB] (14 MBps) Copying: 609/1024 [MB] (15 MBps) Copying: 624/1024 [MB] (15 MBps) Copying: 639/1024 [MB] (14 MBps) Copying: 652/1024 [MB] (12 MBps) Copying: 666/1024 [MB] (14 MBps) Copying: 682/1024 [MB] (15 MBps) Copying: 697/1024 [MB] (15 MBps) Copying: 712/1024 [MB] (15 MBps) Copying: 727/1024 [MB] (15 MBps) Copying: 742/1024 [MB] (15 MBps) Copying: 757/1024 [MB] (14 MBps) Copying: 772/1024 [MB] (15 MBps) Copying: 788/1024 [MB] (15 MBps) Copying: 802/1024 [MB] (14 MBps) Copying: 816/1024 [MB] (13 MBps) Copying: 829/1024 [MB] (12 MBps) Copying: 842/1024 [MB] (13 MBps) Copying: 855/1024 [MB] (13 MBps) Copying: 868/1024 [MB] (13 MBps) Copying: 883/1024 [MB] (14 MBps) Copying: 898/1024 [MB] (14 MBps) Copying: 913/1024 [MB] (15 MBps) Copying: 928/1024 [MB] (15 MBps) Copying: 943/1024 [MB] (15 MBps) Copying: 959/1024 [MB] (15 MBps) Copying: 974/1024 [MB] (15 MBps) Copying: 989/1024 [MB] (15 MBps) Copying: 1005/1024 [MB] (15 MBps) Copying: 1020/1024 [MB] (15 MBps) Copying: 1024/1024 [MB] (average 14 MBps) 00:27:05.067 00:27:05.067 11:52:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:27:05.067 11:52:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:27:05.325 11:52:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:27:05.585 [2024-07-25 11:52:04.385576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.585 [2024-07-25 11:52:04.385645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:05.585 [2024-07-25 11:52:04.385702] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:05.585 [2024-07-25 11:52:04.385715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.585 [2024-07-25 11:52:04.385752] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:05.585 [2024-07-25 11:52:04.389420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.585 [2024-07-25 11:52:04.389474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:05.585 [2024-07-25 11:52:04.389489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.636 ms 00:27:05.585 [2024-07-25 11:52:04.389505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.585 [2024-07-25 11:52:04.391764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.585 [2024-07-25 11:52:04.391847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:05.585 [2024-07-25 11:52:04.391865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.214 ms 00:27:05.585 [2024-07-25 11:52:04.391882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.585 [2024-07-25 11:52:04.409376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.585 [2024-07-25 11:52:04.409444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:05.585 [2024-07-25 11:52:04.409463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.469 ms 00:27:05.585 [2024-07-25 11:52:04.409477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.585 [2024-07-25 11:52:04.415812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.585 [2024-07-25 11:52:04.415864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:05.585 [2024-07-25 11:52:04.415895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.292 ms 00:27:05.585 [2024-07-25 11:52:04.415908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.585 [2024-07-25 11:52:04.445129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.585 [2024-07-25 11:52:04.445193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:05.585 [2024-07-25 11:52:04.445211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.136 ms 00:27:05.585 [2024-07-25 11:52:04.445225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.585 [2024-07-25 11:52:04.463219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.585 [2024-07-25 11:52:04.463287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:05.585 [2024-07-25 11:52:04.463304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.933 ms 00:27:05.585 [2024-07-25 11:52:04.463318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.585 [2024-07-25 11:52:04.463573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.585 [2024-07-25 11:52:04.463600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:05.585 [2024-07-25 11:52:04.463614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:27:05.585 [2024-07-25 11:52:04.463629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.585 [2024-07-25 11:52:04.493082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:27:05.585 [2024-07-25 11:52:04.493190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:27:05.585 [2024-07-25 11:52:04.493211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.429 ms 00:27:05.585 [2024-07-25 11:52:04.493224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.585 [2024-07-25 11:52:04.524693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.585 [2024-07-25 11:52:04.524767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:27:05.585 [2024-07-25 11:52:04.524787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.410 ms 00:27:05.585 [2024-07-25 11:52:04.524801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.585 [2024-07-25 11:52:04.555881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.585 [2024-07-25 11:52:04.555973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:05.585 [2024-07-25 11:52:04.555995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.029 ms 00:27:05.585 [2024-07-25 11:52:04.556010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.585 [2024-07-25 11:52:04.586949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.585 [2024-07-25 11:52:04.587025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:05.585 [2024-07-25 11:52:04.587046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.792 ms 00:27:05.585 [2024-07-25 11:52:04.587061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.585 [2024-07-25 11:52:04.587162] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:05.585 [2024-07-25 11:52:04.587209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:05.585 [2024-07-25 11:52:04.587230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:05.585 [2024-07-25 11:52:04.587245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:05.585 [2024-07-25 11:52:04.587257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:05.585 [2024-07-25 11:52:04.587272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:05.585 [2024-07-25 11:52:04.587284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:05.585 [2024-07-25 11:52:04.587298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:05.585 [2024-07-25 11:52:04.587310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:05.585 [2024-07-25 11:52:04.587329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:05.585 [2024-07-25 11:52:04.587341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:05.585 [2024-07-25 11:52:04.587356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:05.585 [2024-07-25 11:52:04.587368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:05.585 
[2024-07-25 11:52:04.587382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:05.585 [2024-07-25 11:52:04.587393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:05.585 [2024-07-25 11:52:04.587408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:05.585 [2024-07-25 11:52:04.587420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:05.585 [2024-07-25 11:52:04.587434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:05.585 [2024-07-25 11:52:04.587460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:05.585 [2024-07-25 11:52:04.587474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:05.585 [2024-07-25 11:52:04.587486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:05.585 [2024-07-25 11:52:04.587500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:05.585 [2024-07-25 11:52:04.587511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:05.585 [2024-07-25 11:52:04.587527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:05.585 [2024-07-25 11:52:04.587539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.587556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.587567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.587583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.587594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.587608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.587620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.587633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.587644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.587659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.587671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.587685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.587696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.587710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 
state: free 00:27:05.586 [2024-07-25 11:52:04.587721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.587735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.587746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.587763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.587775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.587789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.587801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.587815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.587826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.587840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.587852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.587867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.587887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.587901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.587913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.587926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.587952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.587983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.587997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.588029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.588042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.588072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.588085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.588100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.588119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 
0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.588134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.588146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.588161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.588176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.588190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.588202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.588222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.588234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.588249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.588262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.588281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.588293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.588307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.588319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.588366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.588393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.588408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.588420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.588434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.588445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.588459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.588503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.588518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.588530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.588545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.588557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.588575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.588588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.588606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.588618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.588645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.588672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.588687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.588702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.588718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.588731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.588752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.588764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:05.586 [2024-07-25 11:52:04.588791] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:05.586 [2024-07-25 11:52:04.588803] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d133523e-44ee-4c5c-bfeb-b26f2a26ec1d 00:27:05.586 [2024-07-25 11:52:04.588822] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:05.586 [2024-07-25 11:52:04.588838] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:05.586 [2024-07-25 11:52:04.588855] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:05.586 [2024-07-25 11:52:04.588868] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:05.586 [2024-07-25 11:52:04.588882] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:05.586 [2024-07-25 11:52:04.588894] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:05.586 [2024-07-25 11:52:04.588908] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:05.586 [2024-07-25 11:52:04.588919] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:05.586 [2024-07-25 11:52:04.588931] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:05.586 [2024-07-25 11:52:04.588944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.586 [2024-07-25 11:52:04.588958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:05.586 [2024-07-25 11:52:04.588971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.803 ms 00:27:05.586 [2024-07-25 
11:52:04.588985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.587 [2024-07-25 11:52:04.605418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.587 [2024-07-25 11:52:04.605479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:05.587 [2024-07-25 11:52:04.605496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.317 ms 00:27:05.587 [2024-07-25 11:52:04.605510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.587 [2024-07-25 11:52:04.606021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.587 [2024-07-25 11:52:04.606058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:05.587 [2024-07-25 11:52:04.606081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.481 ms 00:27:05.587 [2024-07-25 11:52:04.606096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.845 [2024-07-25 11:52:04.661417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:05.845 [2024-07-25 11:52:04.661498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:05.845 [2024-07-25 11:52:04.661524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:05.845 [2024-07-25 11:52:04.661539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.845 [2024-07-25 11:52:04.661637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:05.845 [2024-07-25 11:52:04.661687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:05.845 [2024-07-25 11:52:04.661700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:05.846 [2024-07-25 11:52:04.661720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.846 [2024-07-25 11:52:04.661851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:05.846 [2024-07-25 11:52:04.661876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:05.846 [2024-07-25 11:52:04.661890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:05.846 [2024-07-25 11:52:04.661904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.846 [2024-07-25 11:52:04.661972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:05.846 [2024-07-25 11:52:04.661996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:05.846 [2024-07-25 11:52:04.662009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:05.846 [2024-07-25 11:52:04.662028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.846 [2024-07-25 11:52:04.763274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:05.846 [2024-07-25 11:52:04.763372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:05.846 [2024-07-25 11:52:04.763393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:05.846 [2024-07-25 11:52:04.763406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.846 [2024-07-25 11:52:04.844853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:05.846 [2024-07-25 11:52:04.844996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:05.846 [2024-07-25 11:52:04.845034] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:05.846 [2024-07-25 11:52:04.845049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.846 [2024-07-25 11:52:04.845235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:05.846 [2024-07-25 11:52:04.845266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:05.846 [2024-07-25 11:52:04.845280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:05.846 [2024-07-25 11:52:04.845295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.846 [2024-07-25 11:52:04.845413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:05.846 [2024-07-25 11:52:04.845439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:05.846 [2024-07-25 11:52:04.845453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:05.846 [2024-07-25 11:52:04.845467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.846 [2024-07-25 11:52:04.845613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:05.846 [2024-07-25 11:52:04.845645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:05.846 [2024-07-25 11:52:04.845663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:05.846 [2024-07-25 11:52:04.845677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.846 [2024-07-25 11:52:04.845742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:05.846 [2024-07-25 11:52:04.845772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:05.846 [2024-07-25 11:52:04.845785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:05.846 [2024-07-25 11:52:04.845799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.846 [2024-07-25 11:52:04.845915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:05.846 [2024-07-25 11:52:04.845957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:05.846 [2024-07-25 11:52:04.845989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:05.846 [2024-07-25 11:52:04.846005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.846 [2024-07-25 11:52:04.846093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:05.846 [2024-07-25 11:52:04.846119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:05.846 [2024-07-25 11:52:04.846132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:05.846 [2024-07-25 11:52:04.846146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.846 [2024-07-25 11:52:04.846374] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 460.744 ms, result 0 00:27:05.846 true 00:27:05.846 11:52:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 82655 00:27:05.846 11:52:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid82655 00:27:05.846 11:52:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:27:06.105 [2024-07-25 11:52:04.989686] Starting 
SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:27:06.105 [2024-07-25 11:52:04.990109] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83613 ] 00:27:06.105 [2024-07-25 11:52:05.155685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:06.363 [2024-07-25 11:52:05.407898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:13.620  Copying: 187/1024 [MB] (187 MBps) Copying: 376/1024 [MB] (189 MBps) Copying: 565/1024 [MB] (188 MBps) Copying: 747/1024 [MB] (182 MBps) Copying: 927/1024 [MB] (179 MBps) Copying: 1024/1024 [MB] (average 184 MBps) 00:27:13.620 00:27:13.620 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 82655 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:27:13.620 11:52:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:13.620 [2024-07-25 11:52:12.494984] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:27:13.620 [2024-07-25 11:52:12.495222] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83688 ] 00:27:13.878 [2024-07-25 11:52:12.674897] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:14.135 [2024-07-25 11:52:12.935910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:14.455 [2024-07-25 11:52:13.270293] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:14.455 [2024-07-25 11:52:13.270397] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:14.455 [2024-07-25 11:52:13.337521] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:27:14.455 [2024-07-25 11:52:13.337928] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:27:14.455 [2024-07-25 11:52:13.338108] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:27:14.714 [2024-07-25 11:52:13.596690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.714 [2024-07-25 11:52:13.596751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:14.714 [2024-07-25 11:52:13.596790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:14.714 [2024-07-25 11:52:13.596803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.714 [2024-07-25 11:52:13.596877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.714 [2024-07-25 11:52:13.596900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:14.714 [2024-07-25 11:52:13.596914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:27:14.714 [2024-07-25 11:52:13.596925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.714 [2024-07-25 11:52:13.596991] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:14.714 [2024-07-25 11:52:13.597852] 
mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:14.714 [2024-07-25 11:52:13.597877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.714 [2024-07-25 11:52:13.597891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:14.714 [2024-07-25 11:52:13.597903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.894 ms 00:27:14.714 [2024-07-25 11:52:13.597914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.714 [2024-07-25 11:52:13.599968] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:14.714 [2024-07-25 11:52:13.616912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.714 [2024-07-25 11:52:13.616985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:14.714 [2024-07-25 11:52:13.617013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.961 ms 00:27:14.714 [2024-07-25 11:52:13.617027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.714 [2024-07-25 11:52:13.617110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.714 [2024-07-25 11:52:13.617132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:14.714 [2024-07-25 11:52:13.617145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:27:14.714 [2024-07-25 11:52:13.617157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.714 [2024-07-25 11:52:13.626055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.714 [2024-07-25 11:52:13.626099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:14.714 [2024-07-25 11:52:13.626131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.800 ms 00:27:14.714 [2024-07-25 11:52:13.626143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.714 [2024-07-25 11:52:13.626242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.714 [2024-07-25 11:52:13.626263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:14.714 [2024-07-25 11:52:13.626275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:27:14.714 [2024-07-25 11:52:13.626286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.714 [2024-07-25 11:52:13.626353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.714 [2024-07-25 11:52:13.626372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:14.714 [2024-07-25 11:52:13.626389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:27:14.714 [2024-07-25 11:52:13.626399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.714 [2024-07-25 11:52:13.626435] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:14.714 [2024-07-25 11:52:13.631184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.714 [2024-07-25 11:52:13.631225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:14.714 [2024-07-25 11:52:13.631256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.759 ms 00:27:14.714 [2024-07-25 11:52:13.631267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.714 
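The startup trace above and below is the 'FTL startup' management pipeline kicked off by the spdk_dd invocation at dirty_shutdown.sh@88. As a rough sketch of the test flow this log captures — command lines taken from the log itself, with the /home/vagrant/spdk_repo/spdk prefix abbreviated, and $tgt_pid an illustrative stand-in for the spdk_tgt pid (82655 in this run) — the steps are:

  # clean side: flush the NBD device, detach it, and unload the FTL bdev (dirty_shutdown.sh@78-@80)
  sync /dev/nbd0
  scripts/rpc.py nbd_stop_disk /dev/nbd0
  scripts/rpc.py bdev_ftl_unload -b ftl0
  # dirty side: kill the target without unloading, discarding in-memory FTL state (@83-@84)
  kill -9 "$tgt_pid"   # pid 82655 in this run
  rm -f /dev/shm/spdk_tgt_trace.pid82655
  # generate 1 GiB of data (4096 B x 262144), then write it through ftl0; spdk_dd
  # re-creates the device from ftl.json, which triggers the recovery traced here (@87-@88)
  build/bin/spdk_dd --if=/dev/urandom --of=test/ftl/testfile2 --bs=4096 --count=262144
  build/bin/spdk_dd --if=test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=test/ftl/config/ftl.json
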
[2024-07-25 11:52:13.631310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.714 [2024-07-25 11:52:13.631326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:14.714 [2024-07-25 11:52:13.631338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:27:14.714 [2024-07-25 11:52:13.631349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.714 [2024-07-25 11:52:13.631419] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:14.714 [2024-07-25 11:52:13.631455] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:14.714 [2024-07-25 11:52:13.631510] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:14.714 [2024-07-25 11:52:13.631535] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:27:14.714 [2024-07-25 11:52:13.631638] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:14.714 [2024-07-25 11:52:13.631655] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:14.714 [2024-07-25 11:52:13.631669] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:27:14.714 [2024-07-25 11:52:13.631685] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:14.714 [2024-07-25 11:52:13.631699] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:14.714 [2024-07-25 11:52:13.631717] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:14.714 [2024-07-25 11:52:13.631745] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:14.714 [2024-07-25 11:52:13.631756] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:14.714 [2024-07-25 11:52:13.631767] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:14.714 [2024-07-25 11:52:13.631779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.714 [2024-07-25 11:52:13.631790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:14.714 [2024-07-25 11:52:13.631802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.364 ms 00:27:14.714 [2024-07-25 11:52:13.631814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.714 [2024-07-25 11:52:13.631904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.714 [2024-07-25 11:52:13.631936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:14.714 [2024-07-25 11:52:13.631953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:27:14.714 [2024-07-25 11:52:13.632010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.714 [2024-07-25 11:52:13.632126] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:14.714 [2024-07-25 11:52:13.632159] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:14.714 [2024-07-25 11:52:13.632172] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:14.714 [2024-07-25 11:52:13.632184] ftl_layout.c: 121:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:14.714 [2024-07-25 11:52:13.632196] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:14.714 [2024-07-25 11:52:13.632207] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:14.714 [2024-07-25 11:52:13.632217] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:14.714 [2024-07-25 11:52:13.632227] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:14.714 [2024-07-25 11:52:13.632238] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:14.714 [2024-07-25 11:52:13.632248] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:14.714 [2024-07-25 11:52:13.632259] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:14.714 [2024-07-25 11:52:13.632269] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:14.714 [2024-07-25 11:52:13.632278] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:14.714 [2024-07-25 11:52:13.632288] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:14.714 [2024-07-25 11:52:13.632298] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:14.714 [2024-07-25 11:52:13.632338] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:14.714 [2024-07-25 11:52:13.632368] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:14.714 [2024-07-25 11:52:13.632380] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:14.714 [2024-07-25 11:52:13.632391] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:14.714 [2024-07-25 11:52:13.632402] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:14.714 [2024-07-25 11:52:13.632413] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:14.715 [2024-07-25 11:52:13.632423] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:14.715 [2024-07-25 11:52:13.632434] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:14.715 [2024-07-25 11:52:13.632445] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:14.715 [2024-07-25 11:52:13.632455] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:14.715 [2024-07-25 11:52:13.632466] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:14.715 [2024-07-25 11:52:13.632477] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:14.715 [2024-07-25 11:52:13.632488] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:14.715 [2024-07-25 11:52:13.632498] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:14.715 [2024-07-25 11:52:13.632509] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:14.715 [2024-07-25 11:52:13.632519] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:14.715 [2024-07-25 11:52:13.632530] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:14.715 [2024-07-25 11:52:13.632541] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:14.715 [2024-07-25 11:52:13.632552] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:14.715 [2024-07-25 11:52:13.632563] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:14.715 [2024-07-25 11:52:13.632573] 
ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:14.715 [2024-07-25 11:52:13.632584] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:14.715 [2024-07-25 11:52:13.632594] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:14.715 [2024-07-25 11:52:13.632605] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:14.715 [2024-07-25 11:52:13.632615] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:14.715 [2024-07-25 11:52:13.632626] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:14.715 [2024-07-25 11:52:13.632638] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:14.715 [2024-07-25 11:52:13.632648] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:14.715 [2024-07-25 11:52:13.632659] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:14.715 [2024-07-25 11:52:13.632671] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:14.715 [2024-07-25 11:52:13.632697] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:14.715 [2024-07-25 11:52:13.632709] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:14.715 [2024-07-25 11:52:13.632726] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:14.715 [2024-07-25 11:52:13.632738] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:14.715 [2024-07-25 11:52:13.632749] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:14.715 [2024-07-25 11:52:13.632760] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:14.715 [2024-07-25 11:52:13.632770] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:14.715 [2024-07-25 11:52:13.632781] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:14.715 [2024-07-25 11:52:13.632794] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:14.715 [2024-07-25 11:52:13.632824] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:14.715 [2024-07-25 11:52:13.632837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:14.715 [2024-07-25 11:52:13.632849] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:14.715 [2024-07-25 11:52:13.632860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:14.715 [2024-07-25 11:52:13.632872] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:14.715 [2024-07-25 11:52:13.632883] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:14.715 [2024-07-25 11:52:13.632894] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:14.715 [2024-07-25 11:52:13.632906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:14.715 [2024-07-25 
11:52:13.632916] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:14.715 [2024-07-25 11:52:13.632927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:14.715 [2024-07-25 11:52:13.632938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:14.715 [2024-07-25 11:52:13.632963] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:14.715 [2024-07-25 11:52:13.632977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:14.715 [2024-07-25 11:52:13.632988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:14.715 [2024-07-25 11:52:13.632999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:14.715 [2024-07-25 11:52:13.633025] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:14.715 [2024-07-25 11:52:13.633048] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:14.715 [2024-07-25 11:52:13.633061] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:14.715 [2024-07-25 11:52:13.633072] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:14.715 [2024-07-25 11:52:13.633084] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:14.715 [2024-07-25 11:52:13.633095] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:14.715 [2024-07-25 11:52:13.633107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.715 [2024-07-25 11:52:13.633119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:14.715 [2024-07-25 11:52:13.633131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.046 ms 00:27:14.715 [2024-07-25 11:52:13.633142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.715 [2024-07-25 11:52:13.680811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.715 [2024-07-25 11:52:13.680878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:14.715 [2024-07-25 11:52:13.680917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.584 ms 00:27:14.715 [2024-07-25 11:52:13.680984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.715 [2024-07-25 11:52:13.681153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.715 [2024-07-25 11:52:13.681171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:14.715 [2024-07-25 11:52:13.681192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:27:14.715 [2024-07-25 11:52:13.681205] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.715 [2024-07-25 11:52:13.724574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.715 [2024-07-25 11:52:13.724635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:14.715 [2024-07-25 11:52:13.724658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.253 ms 00:27:14.715 [2024-07-25 11:52:13.724671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.715 [2024-07-25 11:52:13.724765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.715 [2024-07-25 11:52:13.724784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:14.715 [2024-07-25 11:52:13.724798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:14.715 [2024-07-25 11:52:13.724810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.715 [2024-07-25 11:52:13.725525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.715 [2024-07-25 11:52:13.725563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:14.715 [2024-07-25 11:52:13.725580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.576 ms 00:27:14.715 [2024-07-25 11:52:13.725592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.715 [2024-07-25 11:52:13.725773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.715 [2024-07-25 11:52:13.725794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:14.715 [2024-07-25 11:52:13.725807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.150 ms 00:27:14.715 [2024-07-25 11:52:13.725819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.715 [2024-07-25 11:52:13.745595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.715 [2024-07-25 11:52:13.745637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:14.715 [2024-07-25 11:52:13.745670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.746 ms 00:27:14.715 [2024-07-25 11:52:13.745682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.715 [2024-07-25 11:52:13.762888] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:14.715 [2024-07-25 11:52:13.762960] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:14.715 [2024-07-25 11:52:13.762983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.715 [2024-07-25 11:52:13.762997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:14.715 [2024-07-25 11:52:13.763011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.154 ms 00:27:14.715 [2024-07-25 11:52:13.763023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.974 [2024-07-25 11:52:13.791351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.974 [2024-07-25 11:52:13.791396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:14.974 [2024-07-25 11:52:13.791415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.276 ms 00:27:14.974 [2024-07-25 11:52:13.791427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.974 [2024-07-25 
11:52:13.805697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.974 [2024-07-25 11:52:13.805740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:14.974 [2024-07-25 11:52:13.805773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.221 ms 00:27:14.974 [2024-07-25 11:52:13.805785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.974 [2024-07-25 11:52:13.819639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.974 [2024-07-25 11:52:13.819702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:14.974 [2024-07-25 11:52:13.819736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.810 ms 00:27:14.974 [2024-07-25 11:52:13.819748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.974 [2024-07-25 11:52:13.820818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.974 [2024-07-25 11:52:13.820902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:14.974 [2024-07-25 11:52:13.820934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.945 ms 00:27:14.974 [2024-07-25 11:52:13.820949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.974 [2024-07-25 11:52:13.895084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.974 [2024-07-25 11:52:13.895169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:14.974 [2024-07-25 11:52:13.895206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.105 ms 00:27:14.974 [2024-07-25 11:52:13.895219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.974 [2024-07-25 11:52:13.906382] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:14.974 [2024-07-25 11:52:13.909684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.974 [2024-07-25 11:52:13.909729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:14.974 [2024-07-25 11:52:13.909762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.389 ms 00:27:14.974 [2024-07-25 11:52:13.909773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.974 [2024-07-25 11:52:13.909917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.974 [2024-07-25 11:52:13.909984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:14.974 [2024-07-25 11:52:13.910000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:27:14.974 [2024-07-25 11:52:13.910011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.974 [2024-07-25 11:52:13.910154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.974 [2024-07-25 11:52:13.910173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:14.974 [2024-07-25 11:52:13.910186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:27:14.974 [2024-07-25 11:52:13.910197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.974 [2024-07-25 11:52:13.910231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.974 [2024-07-25 11:52:13.910247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:14.974 [2024-07-25 11:52:13.910266] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:14.974 [2024-07-25 11:52:13.910277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.974 [2024-07-25 11:52:13.910319] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:14.974 [2024-07-25 11:52:13.910368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.974 [2024-07-25 11:52:13.910380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:14.974 [2024-07-25 11:52:13.910399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:27:14.974 [2024-07-25 11:52:13.910411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.974 [2024-07-25 11:52:13.939429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.974 [2024-07-25 11:52:13.939476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:14.974 [2024-07-25 11:52:13.939509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.988 ms 00:27:14.974 [2024-07-25 11:52:13.939520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.974 [2024-07-25 11:52:13.939604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.974 [2024-07-25 11:52:13.939622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:14.974 [2024-07-25 11:52:13.939635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:27:14.974 [2024-07-25 11:52:13.939646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.974 [2024-07-25 11:52:13.941255] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 343.943 ms, result 0 00:28:02.208  Copying: 1024/1024 [MB] (average 21 MBps)[2024-07-25 11:53:00.966099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:02.208 [2024-07-25 11:53:00.966220]
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:02.208 [2024-07-25 11:53:00.966249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:02.208 [2024-07-25 11:53:00.966264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.208 [2024-07-25 11:53:00.969636] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:02.208 [2024-07-25 11:53:00.974767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:02.208 [2024-07-25 11:53:00.974837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:02.208 [2024-07-25 11:53:00.974890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.069 ms 00:28:02.208 [2024-07-25 11:53:00.974904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.208 [2024-07-25 11:53:00.988823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:02.208 [2024-07-25 11:53:00.988884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:02.208 [2024-07-25 11:53:00.988925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.385 ms 00:28:02.208 [2024-07-25 11:53:00.988960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.208 [2024-07-25 11:53:01.012291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:02.208 [2024-07-25 11:53:01.012341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:02.208 [2024-07-25 11:53:01.012405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.300 ms 00:28:02.208 [2024-07-25 11:53:01.012425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.208 [2024-07-25 11:53:01.018893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:02.208 [2024-07-25 11:53:01.018966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:02.208 [2024-07-25 11:53:01.019013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.413 ms 00:28:02.208 [2024-07-25 11:53:01.019027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.208 [2024-07-25 11:53:01.049745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:02.208 [2024-07-25 11:53:01.049799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:02.208 [2024-07-25 11:53:01.049835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.656 ms 00:28:02.208 [2024-07-25 11:53:01.049849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.208 [2024-07-25 11:53:01.068005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:02.208 [2024-07-25 11:53:01.068055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:02.208 [2024-07-25 11:53:01.068077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.108 ms 00:28:02.208 [2024-07-25 11:53:01.068091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.208 [2024-07-25 11:53:01.182762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:02.209 [2024-07-25 11:53:01.182844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:02.209 [2024-07-25 11:53:01.182869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 114.613 ms 00:28:02.209 [2024-07-25 11:53:01.182897] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.209 [2024-07-25 11:53:01.215403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:02.209 [2024-07-25 11:53:01.215461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:28:02.209 [2024-07-25 11:53:01.215498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.455 ms 00:28:02.209 [2024-07-25 11:53:01.215526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.209 [2024-07-25 11:53:01.246493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:02.209 [2024-07-25 11:53:01.246542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:28:02.209 [2024-07-25 11:53:01.246578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.913 ms 00:28:02.209 [2024-07-25 11:53:01.246591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.469 [2024-07-25 11:53:01.277062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:02.469 [2024-07-25 11:53:01.277117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:02.469 [2024-07-25 11:53:01.277153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.419 ms 00:28:02.469 [2024-07-25 11:53:01.277166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.469 [2024-07-25 11:53:01.306274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:02.469 [2024-07-25 11:53:01.306322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:02.469 [2024-07-25 11:53:01.306359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.981 ms 00:28:02.469 [2024-07-25 11:53:01.306372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.469 [2024-07-25 11:53:01.306421] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:02.469 [2024-07-25 11:53:01.306450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 129792 / 261120 wr_cnt: 1 state: open 00:28:02.469 [2024-07-25 11:53:01.306467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.306481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.306496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.306509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.306523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.306537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.306550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.306564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.306578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.306591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 
state: free 00:28:02.469 [2024-07-25 11:53:01.306605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.306618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.306632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.306646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.306660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.306690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.306722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.306737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.306751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.306766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.306780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.306794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.306807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.306822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.306847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.306863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.306877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.306891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.306905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.306920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.306934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.306970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.306990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.307008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.307023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 
0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.307039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.307064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.307094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.307126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.307142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.307156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.307171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.307185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.307199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.307214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.307228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.307242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.307256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.307270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.307284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.307298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.307312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.307326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.307340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.307355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.307369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.307383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.307397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.307411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.307426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.307440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.307454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.307476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.307502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.307518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.307541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.307572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.307602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.307631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.307659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.307683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.307698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.307714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.307741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:02.469 [2024-07-25 11:53:01.307770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:02.470 [2024-07-25 11:53:01.307801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:02.470 [2024-07-25 11:53:01.307833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:02.470 [2024-07-25 11:53:01.307883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:02.470 [2024-07-25 11:53:01.307914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:02.470 [2024-07-25 11:53:01.307944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:02.470 [2024-07-25 11:53:01.307997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:02.470 [2024-07-25 11:53:01.308029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:02.470 [2024-07-25 11:53:01.308060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:02.470 [2024-07-25 11:53:01.308089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:02.470 [2024-07-25 11:53:01.308117] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:02.470 [2024-07-25 11:53:01.308146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:02.470 [2024-07-25 11:53:01.308189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:02.470 [2024-07-25 11:53:01.308206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:02.470 [2024-07-25 11:53:01.308220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:02.470 [2024-07-25 11:53:01.308234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:02.470 [2024-07-25 11:53:01.308248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:02.470 [2024-07-25 11:53:01.308262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:02.470 [2024-07-25 11:53:01.308276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:02.470 [2024-07-25 11:53:01.308290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:02.470 [2024-07-25 11:53:01.308304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:02.470 [2024-07-25 11:53:01.308317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:02.470 [2024-07-25 11:53:01.308332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:02.470 [2024-07-25 11:53:01.308347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:02.470 [2024-07-25 11:53:01.308402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:02.470 [2024-07-25 11:53:01.308426] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:02.470 [2024-07-25 11:53:01.308441] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d133523e-44ee-4c5c-bfeb-b26f2a26ec1d 00:28:02.470 [2024-07-25 11:53:01.308466] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 129792 00:28:02.470 [2024-07-25 11:53:01.308480] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 130752 00:28:02.470 [2024-07-25 11:53:01.308498] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 129792 00:28:02.470 [2024-07-25 11:53:01.308513] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0074 00:28:02.470 [2024-07-25 11:53:01.308526] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:02.470 [2024-07-25 11:53:01.308540] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:02.470 [2024-07-25 11:53:01.308555] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:02.470 [2024-07-25 11:53:01.308568] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:02.470 [2024-07-25 11:53:01.308581] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:02.470 [2024-07-25 11:53:01.308595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:02.470 [2024-07-25 11:53:01.308609] mngt/ftl_mngt.c: 428:trace_step: 
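The WAF value in the ftl_dev_dump_stats output above follows from the two write counters it prints: write amplification is total media writes divided by user writes (the standard definition; that SPDK computes it exactly this way is an assumption here, but the dumped numbers reproduce it). A quick check in Python with the values from the dump:

total_writes = 130752  # "total writes" in the dump above
user_writes = 129792   # "user writes" in the dump above
print(f"WAF: {total_writes / user_writes:.4f}")  # prints "WAF: 1.0074", matching the log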
00:28:02.470 [2024-07-25 11:53:01.308595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:02.470 [2024-07-25 11:53:01.308609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:28:02.470 [2024-07-25 11:53:01.308641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.177 ms
00:28:02.470 [2024-07-25 11:53:01.308656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:02.470 [2024-07-25 11:53:01.325345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:02.470 [2024-07-25 11:53:01.325392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:28:02.470 [2024-07-25 11:53:01.325429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.621 ms
00:28:02.470 [2024-07-25 11:53:01.325443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:02.470 [2024-07-25 11:53:01.325911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:02.470 [2024-07-25 11:53:01.325969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:28:02.470 [2024-07-25 11:53:01.326001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.438 ms
00:28:02.470 [2024-07-25 11:53:01.326038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:02.470 [2024-07-25 11:53:01.362242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:02.470 [2024-07-25 11:53:01.362294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:28:02.470 [2024-07-25 11:53:01.362329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:28:02.470 [2024-07-25 11:53:01.362343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:02.470 [2024-07-25 11:53:01.362413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:02.470 [2024-07-25 11:53:01.362433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:28:02.470 [2024-07-25 11:53:01.362447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:28:02.470 [2024-07-25 11:53:01.362460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:02.470 [2024-07-25 11:53:01.362555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:02.470 [2024-07-25 11:53:01.362578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:28:02.470 [2024-07-25 11:53:01.362593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:28:02.470 [2024-07-25 11:53:01.362606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:02.470 [2024-07-25 11:53:01.362632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:02.470 [2024-07-25 11:53:01.362649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:28:02.470 [2024-07-25 11:53:01.362662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:28:02.470 [2024-07-25 11:53:01.362676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:02.470 [2024-07-25 11:53:01.454394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:02.470 [2024-07-25 11:53:01.454463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:28:02.470 [2024-07-25 11:53:01.454501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:28:02.470 [2024-07-25 11:53:01.454514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:02.729 [2024-07-25 11:53:01.531307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:02.729 [2024-07-25 11:53:01.531371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:28:02.729 [2024-07-25 11:53:01.531407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:28:02.729 [2024-07-25 11:53:01.531421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:02.729 [2024-07-25 11:53:01.531543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:02.729 [2024-07-25 11:53:01.531571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:28:02.729 [2024-07-25 11:53:01.531585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:28:02.729 [2024-07-25 11:53:01.531598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:02.729 [2024-07-25 11:53:01.531653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:02.729 [2024-07-25 11:53:01.531672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:28:02.729 [2024-07-25 11:53:01.531686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:28:02.729 [2024-07-25 11:53:01.531699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:02.729 [2024-07-25 11:53:01.531823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:02.729 [2024-07-25 11:53:01.531852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:28:02.729 [2024-07-25 11:53:01.531866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:28:02.729 [2024-07-25 11:53:01.531878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:02.729 [2024-07-25 11:53:01.531985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:02.729 [2024-07-25 11:53:01.532020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:28:02.729 [2024-07-25 11:53:01.532047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:28:02.729 [2024-07-25 11:53:01.532068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:02.729 [2024-07-25 11:53:01.532167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:02.729 [2024-07-25 11:53:01.532232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:28:02.729 [2024-07-25 11:53:01.532273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:28:02.729 [2024-07-25 11:53:01.532296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:02.729 [2024-07-25 11:53:01.532444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:02.729 [2024-07-25 11:53:01.532481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:28:02.729 [2024-07-25 11:53:01.532509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:28:02.729 [2024-07-25 11:53:01.532541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:02.729 [2024-07-25 11:53:01.532843] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 570.219 ms, result 0
00:28:04.685
00:28:04.685
00:28:04.685 11:53:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2
00:28:06.583 11:53:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
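dirty_shutdown.sh checksums the reference file at step @90 and then, at step @93, uses spdk_dd to read the data back out of the ftl0 bdev into a second file; presumably the script later compares the two checksums to prove the data survived the dirty shutdown. A sketch of that comparison in Python (the helper is ours, not the script's; paths shortened from the log):

import hashlib

def md5_of(path: str) -> str:
    # Stream the file in 1 MiB chunks so large test files need not fit in memory.
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Data read back through ftl0 must match the reference written before the dirty shutdown.
assert md5_of("test/ftl/testfile") == md5_of("test/ftl/testfile2")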
00:28:06.583 [2024-07-25 11:53:05.455819] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:28:06.583 [2024-07-25 11:53:05.456019] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84204 ]
00:28:06.841 [2024-07-25 11:53:05.637209] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:06.841 [2024-07-25 11:53:05.881458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:28:07.409 [2024-07-25 11:53:06.198775] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:28:07.409 [2024-07-25 11:53:06.198890] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:28:07.409 [2024-07-25 11:53:06.364154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:07.409 [2024-07-25 11:53:06.364246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:28:07.409 [2024-07-25 11:53:06.364274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:28:07.409 [2024-07-25 11:53:06.364290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:07.409 [2024-07-25 11:53:06.364410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:07.409 [2024-07-25 11:53:06.364434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:28:07.409 [2024-07-25 11:53:06.364451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms
00:28:07.409 [2024-07-25 11:53:06.364471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:07.409 [2024-07-25 11:53:06.364517] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:28:07.409 [2024-07-25 11:53:06.365540] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:28:07.409 [2024-07-25 11:53:06.365590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:07.409 [2024-07-25 11:53:06.365608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:28:07.409 [2024-07-25 11:53:06.365623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.087 ms
00:28:07.409 [2024-07-25 11:53:06.365637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:07.409 [2024-07-25 11:53:06.367695] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:28:07.409 [2024-07-25 11:53:06.384069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:07.409 [2024-07-25 11:53:06.384135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:28:07.409 [2024-07-25 11:53:06.384173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.375 ms
00:28:07.409 [2024-07-25 11:53:06.384188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:07.409 [2024-07-25 11:53:06.384284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:07.409 [2024-07-25 11:53:06.384311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:28:07.409 [2024-07-25 11:53:06.384326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms
00:28:07.409 [2024-07-25 11:53:06.384339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:07.409 [2024-07-25 11:53:06.393127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:07.409 [2024-07-25 11:53:06.393179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:28:07.409 [2024-07-25 11:53:06.393214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.664 ms
00:28:07.409 [2024-07-25 11:53:06.393228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:07.409 [2024-07-25 11:53:06.393353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:07.409 [2024-07-25 11:53:06.393377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:28:07.409 [2024-07-25 11:53:06.393392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms
00:28:07.409 [2024-07-25 11:53:06.393405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:07.409 [2024-07-25 11:53:06.393486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:07.409 [2024-07-25 11:53:06.393508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:28:07.409 [2024-07-25 11:53:06.393523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms
00:28:07.409 [2024-07-25 11:53:06.393536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:07.409 [2024-07-25 11:53:06.393581] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:28:07.409 [2024-07-25 11:53:06.398592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:07.409 [2024-07-25 11:53:06.398659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:28:07.409 [2024-07-25 11:53:06.398696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.023 ms
00:28:07.409 [2024-07-25 11:53:06.398725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:07.409 [2024-07-25 11:53:06.398822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:07.409 [2024-07-25 11:53:06.398870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:28:07.409 [2024-07-25 11:53:06.398898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms
00:28:07.409 [2024-07-25 11:53:06.398923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:07.409 [2024-07-25 11:53:06.399057] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:28:07.409 [2024-07-25 11:53:06.399125] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:28:07.409 [2024-07-25 11:53:06.399202] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:28:07.409 [2024-07-25 11:53:06.399257] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes
00:28:07.409 [2024-07-25 11:53:06.399408] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:28:07.409 [2024-07-25 11:53:06.399449] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:28:07.409 [2024-07-25 11:53:06.399483] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes
00:28:07.409 [2024-07-25 11:53:06.399516] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:28:07.409 [2024-07-25 11:53:06.399545] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:28:07.409 [2024-07-25 11:53:06.399577] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
00:28:07.409 [2024-07-25 11:53:06.399606] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:28:07.409 [2024-07-25 11:53:06.399633] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:28:07.409 [2024-07-25 11:53:06.399660] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:28:07.409 [2024-07-25 11:53:06.399689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:07.409 [2024-07-25 11:53:06.399727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:28:07.409 [2024-07-25 11:53:06.399756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.637 ms
00:28:07.409 [2024-07-25 11:53:06.399781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:07.409 [2024-07-25 11:53:06.399914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:07.409 [2024-07-25 11:53:06.399975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:28:07.409 [2024-07-25 11:53:06.400006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms
00:28:07.409 [2024-07-25 11:53:06.400035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:07.409 [2024-07-25 11:53:06.400199] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:28:07.409 [2024-07-25 11:53:06.400241] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:28:07.409 [2024-07-25 11:53:06.400283] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:28:07.409 [2024-07-25 11:53:06.400312] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:28:07.409 [2024-07-25 11:53:06.400339] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:28:07.409 [2024-07-25 11:53:06.400380] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:28:07.409 [2024-07-25 11:53:06.400408] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB
00:28:07.409 [2024-07-25 11:53:06.400435] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:28:07.409 [2024-07-25 11:53:06.400463] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB
00:28:07.409 [2024-07-25 11:53:06.400489] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:28:07.409 [2024-07-25 11:53:06.400514] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:28:07.409 [2024-07-25 11:53:06.400539] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB
00:28:07.409 [2024-07-25 11:53:06.400566] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:28:07.409 [2024-07-25 11:53:06.400591] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:28:07.409 [2024-07-25 11:53:06.400616] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB
00:28:07.409 [2024-07-25 11:53:06.400643] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:28:07.409 [2024-07-25 11:53:06.400670] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:28:07.409 [2024-07-25 11:53:06.400696] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB
00:28:07.409 [2024-07-25 11:53:06.400722] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:28:07.409 [2024-07-25 11:53:06.400750] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:28:07.409 [2024-07-25 11:53:06.400798] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB
00:28:07.409 [2024-07-25 11:53:06.400826] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:28:07.409 [2024-07-25 11:53:06.400852] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:28:07.409 [2024-07-25 11:53:06.400876] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB
00:28:07.409 [2024-07-25 11:53:06.400901] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:28:07.409 [2024-07-25 11:53:06.400949] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:28:07.409 [2024-07-25 11:53:06.400996] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB
00:28:07.409 [2024-07-25 11:53:06.401025] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:28:07.409 [2024-07-25 11:53:06.401051] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:28:07.409 [2024-07-25 11:53:06.401077] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB
00:28:07.409 [2024-07-25 11:53:06.401103] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:28:07.409 [2024-07-25 11:53:06.401129] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:28:07.409 [2024-07-25 11:53:06.401156] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB
00:28:07.409 [2024-07-25 11:53:06.401183] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:28:07.409 [2024-07-25 11:53:06.401210] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:28:07.409 [2024-07-25 11:53:06.401240] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB
00:28:07.409 [2024-07-25 11:53:06.401267] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:28:07.409 [2024-07-25 11:53:06.401295] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:28:07.409 [2024-07-25 11:53:06.401336] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB
00:28:07.409 [2024-07-25 11:53:06.401361] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:28:07.409 [2024-07-25 11:53:06.401386] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:28:07.409 [2024-07-25 11:53:06.401414] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB
00:28:07.409 [2024-07-25 11:53:06.401438] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:28:07.409 [2024-07-25 11:53:06.401462] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:28:07.409 [2024-07-25 11:53:06.401488] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:28:07.409 [2024-07-25 11:53:06.401516] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:28:07.409 [2024-07-25 11:53:06.401544] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:28:07.409 [2024-07-25 11:53:06.401572] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:28:07.410 [2024-07-25 11:53:06.401600] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:28:07.410 [2024-07-25 11:53:06.401625] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:28:07.410 [2024-07-25 11:53:06.401652] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:28:07.410 [2024-07-25 11:53:06.401677] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:28:07.410 [2024-07-25 11:53:06.401702] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:28:07.410 [2024-07-25 11:53:06.401731] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:28:07.410 [2024-07-25 11:53:06.401764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:28:07.410 [2024-07-25 11:53:06.401795] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:28:07.410 [2024-07-25 11:53:06.401823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:28:07.410 [2024-07-25 11:53:06.401849] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:28:07.410 [2024-07-25 11:53:06.401875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:28:07.410 [2024-07-25 11:53:06.401902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:28:07.410 [2024-07-25 11:53:06.401927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:28:07.410 [2024-07-25 11:53:06.401982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:28:07.410 [2024-07-25 11:53:06.402017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:28:07.410 [2024-07-25 11:53:06.402046] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:28:07.410 [2024-07-25 11:53:06.402074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:28:07.410 [2024-07-25 11:53:06.402105] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:28:07.410 [2024-07-25 11:53:06.402134] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:28:07.410 [2024-07-25 11:53:06.402161] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:28:07.410 [2024-07-25 11:53:06.402189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:28:07.410 [2024-07-25 11:53:06.402215] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:28:07.410 [2024-07-25 11:53:06.402247] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:28:07.410 [2024-07-25 11:53:06.402289] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:28:07.410 [2024-07-25 11:53:06.402320] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:28:07.410 [2024-07-25 11:53:06.402348] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:28:07.410 [2024-07-25 11:53:06.402375] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:28:07.410 [2024-07-25 11:53:06.402405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:07.410 [2024-07-25 11:53:06.402433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:28:07.410 [2024-07-25 11:53:06.402461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.284 ms
00:28:07.410 [2024-07-25 11:53:06.402489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:07.669 [2024-07-25 11:53:06.460504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:07.669 [2024-07-25 11:53:06.460570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:28:07.669 [2024-07-25 11:53:06.460612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.887 ms
00:28:07.669 [2024-07-25 11:53:06.460626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:07.669 [2024-07-25 11:53:06.460768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:07.669 [2024-07-25 11:53:06.460788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:28:07.669 [2024-07-25 11:53:06.460804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms
00:28:07.669 [2024-07-25 11:53:06.460818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:07.669 [2024-07-25 11:53:06.502490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:07.669 [2024-07-25 11:53:06.502555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:28:07.669 [2024-07-25 11:53:06.502595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.486 ms
00:28:07.669 [2024-07-25 11:53:06.502608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:07.669 [2024-07-25 11:53:06.502694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:07.669 [2024-07-25 11:53:06.502714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:28:07.669 [2024-07-25 11:53:06.502729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:28:07.669 [2024-07-25 11:53:06.502749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:07.669 [2024-07-25 11:53:06.503538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:07.669 [2024-07-25 11:53:06.503590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:28:07.669 [2024-07-25 11:53:06.503611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.690 ms
00:28:07.669 [2024-07-25 11:53:06.503626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
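The blk_offs/blk_sz values in the superblock layout dump a few steps above are counts of FTL blocks; assuming SPDK FTL's 4 KiB block size, they convert directly to the MiB figures shown in the region dump. For example, the 0x5000-block region lines up with the 80.00 MiB l2p region:

FTL_BLOCK_SIZE = 4096  # bytes per FTL block (assumed; SPDK FTL uses 4 KiB blocks)
l2p_blocks = 0x5000    # blk_sz of "Region type:0x2" in the SB metadata dump above
print(l2p_blocks * FTL_BLOCK_SIZE / 2**20, "MiB")  # -> 80.0 MiB, matching "Region l2p ... blocks: 80.00 MiB"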
00:28:07.669 [2024-07-25 11:53:06.503845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:07.669 [2024-07-25 11:53:06.503868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:28:07.669 [2024-07-25 11:53:06.503882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.178 ms
00:28:07.669 [2024-07-25 11:53:06.503895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:07.669 [2024-07-25 11:53:06.522498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:07.669 [2024-07-25 11:53:06.522545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:28:07.669 [2024-07-25 11:53:06.522582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.548 ms
00:28:07.669 [2024-07-25 11:53:06.522602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:07.669 [2024-07-25 11:53:06.538717] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0
00:28:07.669 [2024-07-25 11:53:06.538792] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:28:07.669 [2024-07-25 11:53:06.538813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:07.669 [2024-07-25 11:53:06.538827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:28:07.669 [2024-07-25 11:53:06.538841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.009 ms
00:28:07.669 [2024-07-25 11:53:06.538854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:07.669 [2024-07-25 11:53:06.565117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:07.669 [2024-07-25 11:53:06.565170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:28:07.669 [2024-07-25 11:53:06.565207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.212 ms
00:28:07.669 [2024-07-25 11:53:06.565221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:07.669 [2024-07-25 11:53:06.579858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:07.669 [2024-07-25 11:53:06.579902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:28:07.669 [2024-07-25 11:53:06.579972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.579 ms
00:28:07.669 [2024-07-25 11:53:06.579989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:07.669 [2024-07-25 11:53:06.594683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:07.669 [2024-07-25 11:53:06.594747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:28:07.669 [2024-07-25 11:53:06.594783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.630 ms
00:28:07.669 [2024-07-25 11:53:06.594797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:07.669 [2024-07-25 11:53:06.595609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:07.669 [2024-07-25 11:53:06.595659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:28:07.669 [2024-07-25 11:53:06.595689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.662 ms
00:28:07.669 [2024-07-25 11:53:06.595712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:07.669 [2024-07-25 11:53:06.664089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:07.669 [2024-07-25 11:53:06.664177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:28:07.669 [2024-07-25 11:53:06.664217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.316 ms
00:28:07.669 [2024-07-25 11:53:06.664240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:07.669 [2024-07-25 11:53:06.674573] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:28:07.669 [2024-07-25 11:53:06.676923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:07.669 [2024-07-25 11:53:06.677006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:28:07.669 [2024-07-25 11:53:06.677027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.617 ms
00:28:07.669 [2024-07-25 11:53:06.677040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:07.669 [2024-07-25 11:53:06.677153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:07.669 [2024-07-25 11:53:06.677175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:28:07.669 [2024-07-25 11:53:06.677189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms
00:28:07.669 [2024-07-25 11:53:06.677236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:07.669 [2024-07-25 11:53:06.679340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:07.669 [2024-07-25 11:53:06.679381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:28:07.669 [2024-07-25 11:53:06.679415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.964 ms
00:28:07.669 [2024-07-25 11:53:06.679427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:07.669 [2024-07-25 11:53:06.679469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:07.669 [2024-07-25 11:53:06.679487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:28:07.669 [2024-07-25 11:53:06.679501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms
00:28:07.669 [2024-07-25 11:53:06.679513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:07.669 [2024-07-25 11:53:06.679566] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:28:07.669 [2024-07-25 11:53:06.679586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:07.669 [2024-07-25 11:53:06.679604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:28:07.669 [2024-07-25 11:53:06.679617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms
00:28:07.669 [2024-07-25 11:53:06.679629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:07.669 [2024-07-25 11:53:06.705429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:07.669 [2024-07-25 11:53:06.705471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:28:07.669 [2024-07-25 11:53:06.705506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.767 ms
00:28:07.669 [2024-07-25 11:53:06.705528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:07.669 [2024-07-25 11:53:06.707724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:07.669 [2024-07-25 11:53:06.707765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:28:07.669 [2024-07-25 11:53:06.707800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms
00:28:07.669 [2024-07-25 11:53:06.707813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:07.669 [2024-07-25 11:53:06.716573] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 350.207 ms, result 0
00:28:52.356 Copying: 888/1048576 [kB] (888 kBps) .. Copying: 1024/1024 [MB] (average 23 MBps)
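The "(average 23 MBps)" figure spdk_dd prints is just total data copied over elapsed wall time; it can be reproduced from the surrounding log timestamps (taking the end of 'FTL startup' as the start of the copy, which is an approximation):

from datetime import datetime

start = datetime.fromisoformat("2024-07-25 11:53:06.716573")  # 'FTL startup' finished, above
end = datetime.fromisoformat("2024-07-25 11:53:51.211541")    # first record after the copy, below
print(f"average {1024 / (end - start).total_seconds():.0f} MBps")  # -> average 23 MBps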
00:28:52.356 [2024-07-25 11:53:51.211541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:52.356 [2024-07-25 11:53:51.211638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:28:52.356 [2024-07-25 11:53:51.211673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms
00:28:52.356 [2024-07-25 11:53:51.211686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:52.356 [2024-07-25 11:53:51.211722] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:28:52.356 [2024-07-25 11:53:51.215828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:52.356 [2024-07-25 11:53:51.215887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:28:52.356 [2024-07-25 11:53:51.215905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.082 ms
00:28:52.356 [2024-07-25 11:53:51.215927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:52.356 [2024-07-25 11:53:51.216189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:52.356 [2024-07-25 11:53:51.216209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:28:52.356 [2024-07-25 11:53:51.216230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.227 ms
00:28:52.356 [2024-07-25 11:53:51.216243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:52.356 [2024-07-25 11:53:51.229736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:52.356 [2024-07-25 11:53:51.229802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:28:52.356 [2024-07-25 11:53:51.229839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.467 ms
00:28:52.356 [2024-07-25 11:53:51.229852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:52.356 [2024-07-25 11:53:51.236675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:52.356 [2024-07-25 11:53:51.236762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:28:52.356 [2024-07-25 11:53:51.236788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.778 ms
00:28:52.356 [2024-07-25 11:53:51.236809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:52.356 [2024-07-25 11:53:51.266381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:52.356 [2024-07-25 11:53:51.266421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:28:52.356 [2024-07-25 11:53:51.266454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.490 ms
00:28:52.356 [2024-07-25 11:53:51.266465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:52.357 [2024-07-25 11:53:51.284592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:52.357 [2024-07-25 11:53:51.284637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:28:52.357 [2024-07-25 11:53:51.284655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.085 ms
00:28:52.357 [2024-07-25 11:53:51.284668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:52.357 [2024-07-25 11:53:51.288532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:52.357 [2024-07-25 11:53:51.288576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:28:52.357 [2024-07-25 11:53:51.288594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.785 ms
00:28:52.357 [2024-07-25 11:53:51.288606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:52.357 [2024-07-25 11:53:51.316850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:52.357 [2024-07-25 11:53:51.316891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata
00:28:52.357 [2024-07-25 11:53:51.316923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.221 ms
00:28:52.357 [2024-07-25 11:53:51.316968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:52.357 [2024-07-25 11:53:51.344229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:52.357 [2024-07-25 11:53:51.344267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata
00:28:52.357 [2024-07-25 11:53:51.344299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.200 ms
00:28:52.357 [2024-07-25 11:53:51.344310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:52.357 [2024-07-25 11:53:51.371552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:52.357 [2024-07-25 11:53:51.371589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:28:52.357 [2024-07-25 11:53:51.371621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.202 ms
00:28:52.357 [2024-07-25 11:53:51.371646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:52.357 [2024-07-25 11:53:51.399142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:52.357 [2024-07-25 11:53:51.399180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:28:52.357 [2024-07-25 11:53:51.399222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.403 ms
00:28:52.357 [2024-07-25 11:53:51.399232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:52.357 [2024-07-25 11:53:51.399283] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:28:52.357 [2024-07-25 11:53:51.399306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed
00:28:52.357 [2024-07-25 11:53:51.399321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3584 / 261120 wr_cnt: 1 state: open
00:28:52.357 [2024-07-25 11:53:51.399333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:28:52.357 [2024-07-25 11:53:51.399345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:28:52.357 [2024-07-25 11:53:51.399356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:28:52.357 [2024-07-25 11:53:51.399367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:28:52.357 [2024-07-25 11:53:51.399378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:28:52.357 [2024-07-25 11:53:51.399389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:28:52.357 [2024-07-25 11:53:51.399408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:28:52.357 [2024-07-25 11:53:51.399419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:28:52.357 [2024-07-25 11:53:51.399430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:28:52.357 [2024-07-25 11:53:51.399442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:28:52.357 [2024-07-25 11:53:51.399453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:28:52.357 [2024-07-25 11:53:51.399464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:28:52.357 [2024-07-25 11:53:51.399475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:28:52.357 [2024-07-25 11:53:51.399486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:28:52.357 [2024-07-25 11:53:51.399497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:28:52.357 [2024-07-25 11:53:51.399508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:28:52.357 [2024-07-25 11:53:51.399520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free
00:28:52.357 [2024-07-25 11:53:51.399530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free
00:28:52.357 [2024-07-25 11:53:51.399541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free
00:28:52.357 [2024-07-25 11:53:51.399552] ftl_debug.c: 167:ftl_dev_dump_bands:
*NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.399564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.399576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.399587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.399598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.399627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.399639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.399650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.399662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.399673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.399685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.399696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.399709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.399721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.399733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.399745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.399757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.399769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.399780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.399792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.399803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.399814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.399826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.399838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.399849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.399861] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.399873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.399885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.399896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.399908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.399920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.399931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.399980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.399994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.400007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.400019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.400054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.400067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.400096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.400109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.400122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.400134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.400148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.400160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.400174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.400187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.400199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.400211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.400223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.400236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 
11:53:51.400248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.400260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.400272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.400285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.400298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.400310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.400322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.400334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.400347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.400359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.400371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.400404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.400416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.400428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.400441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.400453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.400465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.400478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.400490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.400502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.400514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.400527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.400540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.400552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.400564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 
00:28:52.357 [2024-07-25 11:53:51.400577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.400592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.400607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.400619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:52.357 [2024-07-25 11:53:51.400641] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:52.357 [2024-07-25 11:53:51.400653] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d133523e-44ee-4c5c-bfeb-b26f2a26ec1d 00:28:52.357 [2024-07-25 11:53:51.400666] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264704 00:28:52.357 [2024-07-25 11:53:51.400695] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 136896 00:28:52.357 [2024-07-25 11:53:51.400706] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 134912 00:28:52.357 [2024-07-25 11:53:51.400719] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0147 00:28:52.357 [2024-07-25 11:53:51.400734] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:52.357 [2024-07-25 11:53:51.400746] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:52.357 [2024-07-25 11:53:51.400757] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:52.357 [2024-07-25 11:53:51.400768] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:52.357 [2024-07-25 11:53:51.400778] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:52.357 [2024-07-25 11:53:51.400790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.357 [2024-07-25 11:53:51.400802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:52.357 [2024-07-25 11:53:51.400814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.515 ms 00:28:52.357 [2024-07-25 11:53:51.400826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.616 [2024-07-25 11:53:51.418291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.616 [2024-07-25 11:53:51.418375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:52.616 [2024-07-25 11:53:51.418417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.421 ms 00:28:52.616 [2024-07-25 11:53:51.418441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.616 [2024-07-25 11:53:51.418920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.616 [2024-07-25 11:53:51.418961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:52.616 [2024-07-25 11:53:51.418979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.453 ms 00:28:52.616 [2024-07-25 11:53:51.418991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.616 [2024-07-25 11:53:51.456663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:52.616 [2024-07-25 11:53:51.456722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:52.616 [2024-07-25 11:53:51.456765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:28:52.616 [2024-07-25 11:53:51.456776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.616 [2024-07-25 11:53:51.456845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:52.616 [2024-07-25 11:53:51.456860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:52.616 [2024-07-25 11:53:51.456872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:52.616 [2024-07-25 11:53:51.456883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.616 [2024-07-25 11:53:51.456987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:52.616 [2024-07-25 11:53:51.457012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:52.616 [2024-07-25 11:53:51.457039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:52.616 [2024-07-25 11:53:51.457081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.616 [2024-07-25 11:53:51.457105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:52.616 [2024-07-25 11:53:51.457127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:52.616 [2024-07-25 11:53:51.457139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:52.616 [2024-07-25 11:53:51.457155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.616 [2024-07-25 11:53:51.547799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:52.616 [2024-07-25 11:53:51.547872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:52.616 [2024-07-25 11:53:51.547907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:52.616 [2024-07-25 11:53:51.547918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.616 [2024-07-25 11:53:51.621902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:52.616 [2024-07-25 11:53:51.621999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:52.616 [2024-07-25 11:53:51.622018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:52.616 [2024-07-25 11:53:51.622030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.616 [2024-07-25 11:53:51.622153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:52.616 [2024-07-25 11:53:51.622170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:52.616 [2024-07-25 11:53:51.622190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:52.616 [2024-07-25 11:53:51.622201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.616 [2024-07-25 11:53:51.622249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:52.616 [2024-07-25 11:53:51.622265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:52.616 [2024-07-25 11:53:51.622276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:52.616 [2024-07-25 11:53:51.622286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.616 [2024-07-25 11:53:51.622465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:52.616 [2024-07-25 11:53:51.622483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:52.616 [2024-07-25 
11:53:51.622497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:52.616 [2024-07-25 11:53:51.622514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.616 [2024-07-25 11:53:51.622562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:52.616 [2024-07-25 11:53:51.622592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:52.616 [2024-07-25 11:53:51.622605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:52.616 [2024-07-25 11:53:51.622616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.616 [2024-07-25 11:53:51.622668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:52.616 [2024-07-25 11:53:51.622690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:52.616 [2024-07-25 11:53:51.622703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:52.616 [2024-07-25 11:53:51.622715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.616 [2024-07-25 11:53:51.622781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:52.616 [2024-07-25 11:53:51.622798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:52.616 [2024-07-25 11:53:51.622820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:52.616 [2024-07-25 11:53:51.622833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.616 [2024-07-25 11:53:51.623024] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 411.441 ms, result 0 00:28:53.991 00:28:53.991 00:28:53.991 11:53:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:55.892 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:28:55.892 11:53:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:55.892 [2024-07-25 11:53:54.780034] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
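A few sanity checks on the numbers above (my arithmetic, not part of the log). The bands-validity dump shows one closed band (261120/261120) and one open band (3584/261120); their sum matches the reported valid-LBA total, and the write amplification factor is simply the ratio of the two write counters in the statistics dump:

    \text{valid LBAs} = 261120 + 3584 = 264704, \qquad \mathrm{WAF} = \frac{\text{total writes}}{\text{user writes}} = \frac{136896}{134912} \approx 1.0147.

Likewise, assuming the FTL bdev exposes 4 KiB logical blocks (the block size itself is not printed here, though the layout dump below is consistent with it), the spdk_dd arguments --count=262144 --skip=262144 amount to "skip the first GiB of ftl0, read back the second GiB":

    262144 \times 4\,\mathrm{KiB} = 2^{18} \times 2^{12}\,\mathrm{B} = 1\,\mathrm{GiB} = 1024\,\mathrm{MiB},

which is why the copy progress that follows runs to exactly 1024 [MB].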
00:28:55.892 [2024-07-25 11:53:54.780233] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84689 ] 00:28:56.151 [2024-07-25 11:53:54.948029] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:56.409 [2024-07-25 11:53:55.212976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:56.669 [2024-07-25 11:53:55.560195] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:56.669 [2024-07-25 11:53:55.560287] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:56.930 [2024-07-25 11:53:55.724868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.930 [2024-07-25 11:53:55.725025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:56.930 [2024-07-25 11:53:55.725051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:56.930 [2024-07-25 11:53:55.725063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.930 [2024-07-25 11:53:55.725150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.930 [2024-07-25 11:53:55.725171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:56.930 [2024-07-25 11:53:55.725184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:28:56.930 [2024-07-25 11:53:55.725201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.930 [2024-07-25 11:53:55.725239] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:56.930 [2024-07-25 11:53:55.726241] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:56.930 [2024-07-25 11:53:55.726286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.930 [2024-07-25 11:53:55.726301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:56.930 [2024-07-25 11:53:55.726315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.060 ms 00:28:56.930 [2024-07-25 11:53:55.726326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.930 [2024-07-25 11:53:55.728374] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:56.930 [2024-07-25 11:53:55.743701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.930 [2024-07-25 11:53:55.743745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:56.930 [2024-07-25 11:53:55.743780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.328 ms 00:28:56.930 [2024-07-25 11:53:55.743792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.930 [2024-07-25 11:53:55.743873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.930 [2024-07-25 11:53:55.743895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:56.930 [2024-07-25 11:53:55.743908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:28:56.930 [2024-07-25 11:53:55.743938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.930 [2024-07-25 11:53:55.753034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:56.930 [2024-07-25 11:53:55.753080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:56.930 [2024-07-25 11:53:55.753112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.960 ms 00:28:56.930 [2024-07-25 11:53:55.753123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.930 [2024-07-25 11:53:55.753234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.931 [2024-07-25 11:53:55.753255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:56.931 [2024-07-25 11:53:55.753268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:28:56.931 [2024-07-25 11:53:55.753279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.931 [2024-07-25 11:53:55.753366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.931 [2024-07-25 11:53:55.753384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:56.931 [2024-07-25 11:53:55.753396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:28:56.931 [2024-07-25 11:53:55.753407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.931 [2024-07-25 11:53:55.753448] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:56.931 [2024-07-25 11:53:55.758380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.931 [2024-07-25 11:53:55.758592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:56.931 [2024-07-25 11:53:55.758741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.944 ms 00:28:56.931 [2024-07-25 11:53:55.758876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.931 [2024-07-25 11:53:55.758998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.931 [2024-07-25 11:53:55.759064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:56.931 [2024-07-25 11:53:55.759172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:28:56.931 [2024-07-25 11:53:55.759221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.931 [2024-07-25 11:53:55.759326] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:56.931 [2024-07-25 11:53:55.759406] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:56.931 [2024-07-25 11:53:55.759591] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:56.931 [2024-07-25 11:53:55.759737] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:28:56.931 [2024-07-25 11:53:55.759850] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:56.931 [2024-07-25 11:53:55.759867] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:56.931 [2024-07-25 11:53:55.759882] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:28:56.931 [2024-07-25 11:53:55.759897] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:56.931 [2024-07-25 11:53:55.759912] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:56.931 [2024-07-25 11:53:55.759965] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:56.931 [2024-07-25 11:53:55.759979] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:56.931 [2024-07-25 11:53:55.759991] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:56.931 [2024-07-25 11:53:55.760002] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:56.931 [2024-07-25 11:53:55.760022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.931 [2024-07-25 11:53:55.760035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:56.931 [2024-07-25 11:53:55.760048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.700 ms 00:28:56.931 [2024-07-25 11:53:55.760059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.931 [2024-07-25 11:53:55.760157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.931 [2024-07-25 11:53:55.760175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:56.931 [2024-07-25 11:53:55.760187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:28:56.931 [2024-07-25 11:53:55.760199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.931 [2024-07-25 11:53:55.760321] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:56.931 [2024-07-25 11:53:55.760345] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:56.931 [2024-07-25 11:53:55.760358] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:56.931 [2024-07-25 11:53:55.760369] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:56.931 [2024-07-25 11:53:55.760394] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:56.931 [2024-07-25 11:53:55.760405] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:56.931 [2024-07-25 11:53:55.760416] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:56.931 [2024-07-25 11:53:55.760426] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:56.931 [2024-07-25 11:53:55.760436] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:56.931 [2024-07-25 11:53:55.760446] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:56.931 [2024-07-25 11:53:55.760456] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:56.931 [2024-07-25 11:53:55.760466] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:56.931 [2024-07-25 11:53:55.760475] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:56.931 [2024-07-25 11:53:55.760486] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:56.931 [2024-07-25 11:53:55.760496] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:56.931 [2024-07-25 11:53:55.760506] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:56.931 [2024-07-25 11:53:55.760516] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:56.931 [2024-07-25 11:53:55.760526] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:56.931 [2024-07-25 11:53:55.760537] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:56.931 [2024-07-25 11:53:55.760547] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:56.931 [2024-07-25 11:53:55.760572] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:56.931 [2024-07-25 11:53:55.760583] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:56.931 [2024-07-25 11:53:55.760593] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:56.931 [2024-07-25 11:53:55.760603] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:56.931 [2024-07-25 11:53:55.760613] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:56.931 [2024-07-25 11:53:55.760625] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:56.931 [2024-07-25 11:53:55.760636] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:56.931 [2024-07-25 11:53:55.760646] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:56.931 [2024-07-25 11:53:55.760656] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:56.931 [2024-07-25 11:53:55.760667] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:56.931 [2024-07-25 11:53:55.760677] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:56.931 [2024-07-25 11:53:55.760690] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:56.931 [2024-07-25 11:53:55.760701] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:56.931 [2024-07-25 11:53:55.760710] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:56.931 [2024-07-25 11:53:55.760721] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:56.931 [2024-07-25 11:53:55.760731] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:56.931 [2024-07-25 11:53:55.760741] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:56.931 [2024-07-25 11:53:55.760751] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:56.931 [2024-07-25 11:53:55.760762] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:56.931 [2024-07-25 11:53:55.760772] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:56.931 [2024-07-25 11:53:55.760782] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:56.931 [2024-07-25 11:53:55.760793] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:56.931 [2024-07-25 11:53:55.760803] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:56.931 [2024-07-25 11:53:55.760813] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:56.931 [2024-07-25 11:53:55.760825] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:56.931 [2024-07-25 11:53:55.760836] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:56.931 [2024-07-25 11:53:55.760847] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:56.931 [2024-07-25 11:53:55.760859] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:56.931 [2024-07-25 11:53:55.760869] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:56.931 [2024-07-25 11:53:55.760880] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:56.931 
[2024-07-25 11:53:55.760891] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:56.931 [2024-07-25 11:53:55.760901] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:56.931 [2024-07-25 11:53:55.760912] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:56.931 [2024-07-25 11:53:55.760940] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:56.931 [2024-07-25 11:53:55.760960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:56.931 [2024-07-25 11:53:55.760973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:56.931 [2024-07-25 11:53:55.760985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:56.931 [2024-07-25 11:53:55.761005] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:56.931 [2024-07-25 11:53:55.761017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:56.931 [2024-07-25 11:53:55.761028] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:56.931 [2024-07-25 11:53:55.761040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:56.931 [2024-07-25 11:53:55.761051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:56.931 [2024-07-25 11:53:55.761063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:56.932 [2024-07-25 11:53:55.761074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:56.932 [2024-07-25 11:53:55.761085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:56.932 [2024-07-25 11:53:55.761096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:56.932 [2024-07-25 11:53:55.761107] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:56.932 [2024-07-25 11:53:55.761118] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:56.932 [2024-07-25 11:53:55.761129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:56.932 [2024-07-25 11:53:55.761140] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:56.932 [2024-07-25 11:53:55.761158] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:56.932 [2024-07-25 11:53:55.761186] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:28:56.932 [2024-07-25 11:53:55.761199] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:56.932 [2024-07-25 11:53:55.761210] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:56.932 [2024-07-25 11:53:55.761227] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:56.932 [2024-07-25 11:53:55.761239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.932 [2024-07-25 11:53:55.761252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:56.932 [2024-07-25 11:53:55.761264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.995 ms 00:28:56.932 [2024-07-25 11:53:55.761276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.932 [2024-07-25 11:53:55.807382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.932 [2024-07-25 11:53:55.807792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:56.932 [2024-07-25 11:53:55.807915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.024 ms 00:28:56.932 [2024-07-25 11:53:55.808074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.932 [2024-07-25 11:53:55.808268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.932 [2024-07-25 11:53:55.808319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:56.932 [2024-07-25 11:53:55.808428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:28:56.932 [2024-07-25 11:53:55.808477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.932 [2024-07-25 11:53:55.848292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.932 [2024-07-25 11:53:55.848699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:56.932 [2024-07-25 11:53:55.848820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.668 ms 00:28:56.932 [2024-07-25 11:53:55.848952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.932 [2024-07-25 11:53:55.849086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.932 [2024-07-25 11:53:55.849136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:56.932 [2024-07-25 11:53:55.849246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:56.932 [2024-07-25 11:53:55.849388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.932 [2024-07-25 11:53:55.850231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.932 [2024-07-25 11:53:55.850398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:56.932 [2024-07-25 11:53:55.850523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.673 ms 00:28:56.932 [2024-07-25 11:53:55.850625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.932 [2024-07-25 11:53:55.850874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.932 [2024-07-25 11:53:55.850949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:56.932 [2024-07-25 11:53:55.851107] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.177 ms 00:28:56.932 [2024-07-25 11:53:55.851167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.932 [2024-07-25 11:53:55.868765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.932 [2024-07-25 11:53:55.869039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:56.932 [2024-07-25 11:53:55.869082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.470 ms 00:28:56.932 [2024-07-25 11:53:55.869096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.932 [2024-07-25 11:53:55.884258] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:56.932 [2024-07-25 11:53:55.884300] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:56.932 [2024-07-25 11:53:55.884320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.932 [2024-07-25 11:53:55.884333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:56.932 [2024-07-25 11:53:55.884345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.037 ms 00:28:56.932 [2024-07-25 11:53:55.884356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.932 [2024-07-25 11:53:55.910264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.932 [2024-07-25 11:53:55.910347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:56.932 [2024-07-25 11:53:55.910397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.842 ms 00:28:56.932 [2024-07-25 11:53:55.910417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.932 [2024-07-25 11:53:55.926506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.932 [2024-07-25 11:53:55.926584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:56.932 [2024-07-25 11:53:55.926628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.029 ms 00:28:56.932 [2024-07-25 11:53:55.926641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.932 [2024-07-25 11:53:55.942396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.932 [2024-07-25 11:53:55.942438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:56.932 [2024-07-25 11:53:55.942471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.612 ms 00:28:56.932 [2024-07-25 11:53:55.942482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.932 [2024-07-25 11:53:55.943584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.932 [2024-07-25 11:53:55.943621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:56.932 [2024-07-25 11:53:55.943638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.960 ms 00:28:56.932 [2024-07-25 11:53:55.943685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.191 [2024-07-25 11:53:56.015344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.191 [2024-07-25 11:53:56.015422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:57.191 [2024-07-25 11:53:56.015468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 71.587 ms 00:28:57.191 [2024-07-25 11:53:56.015480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.191 [2024-07-25 11:53:56.026478] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:57.191 [2024-07-25 11:53:56.029338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.192 [2024-07-25 11:53:56.029374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:57.192 [2024-07-25 11:53:56.029407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.787 ms 00:28:57.192 [2024-07-25 11:53:56.029418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.192 [2024-07-25 11:53:56.029540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.192 [2024-07-25 11:53:56.029568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:57.192 [2024-07-25 11:53:56.029582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:28:57.192 [2024-07-25 11:53:56.029597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.192 [2024-07-25 11:53:56.030757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.192 [2024-07-25 11:53:56.030805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:57.192 [2024-07-25 11:53:56.030850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.098 ms 00:28:57.192 [2024-07-25 11:53:56.030861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.192 [2024-07-25 11:53:56.030899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.192 [2024-07-25 11:53:56.030915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:57.192 [2024-07-25 11:53:56.030928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:57.192 [2024-07-25 11:53:56.030939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.192 [2024-07-25 11:53:56.031009] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:57.192 [2024-07-25 11:53:56.031045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.192 [2024-07-25 11:53:56.031056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:57.192 [2024-07-25 11:53:56.031067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:28:57.192 [2024-07-25 11:53:56.031077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.192 [2024-07-25 11:53:56.059052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.192 [2024-07-25 11:53:56.059129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:57.192 [2024-07-25 11:53:56.059183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.935 ms 00:28:57.192 [2024-07-25 11:53:56.059210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.192 [2024-07-25 11:53:56.059335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.192 [2024-07-25 11:53:56.059355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:57.192 [2024-07-25 11:53:56.059368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:28:57.192 [2024-07-25 11:53:56.059379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
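The L2P and P2L figures reported during this startup are internally consistent (again my arithmetic): 20971520 L2P entries at the stated 4-byte address size give

    20971520 \times 4\,\mathrm{B} = 83886080\,\mathrm{B} = 80\,\mathrm{MiB},

exactly the 80.00 MiB "Region l2p" in the layout dump, of which the l2p cache keeps at most 9 of a 10 MiB budget resident per the ftl_l2p_cache message above. Similarly, each of the p2l0-p2l3 regions spans 0x800 = 2048 blocks and 8.00 MiB, matching the reported 2048 P2L checkpoint pages and pinning the FTL block size at 8 MiB / 2048 = 4 KiB, the same 4 KiB assumed in the dd arithmetic earlier.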
00:28:57.192 [2024-07-25 11:53:56.061163] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 335.515 ms, result 0 00:29:39.836  Copying: 1024/1024 [MB] (average 24 MBps)[2024-07-25 11:54:38.630503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.836 [2024-07-25 11:54:38.630588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:39.836 [2024-07-25 11:54:38.630627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:39.836 [2024-07-25 11:54:38.630640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.836 [2024-07-25 11:54:38.630681] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:39.836 [2024-07-25 11:54:38.634840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.836 [2024-07-25 11:54:38.635046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:39.836 [2024-07-25 11:54:38.635211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.121 ms 00:29:39.836 [2024-07-25 11:54:38.635268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.836 [2024-07-25 11:54:38.635566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.836 [2024-07-25 11:54:38.635634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:39.836 [2024-07-25 11:54:38.635685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.233 ms 00:29:39.836 [2024-07-25 11:54:38.635793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.836 [2024-07-25 11:54:38.639325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.836 [2024-07-25 11:54:38.639515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:39.836 [2024-07-25 11:54:38.639675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.474 ms 00:29:39.836 [2024-07-25 11:54:38.639804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
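Every management step in this log is bracketed by the same four trace lines (Action / name / duration / status) emitted from mngt/ftl_mngt.c. A minimal standalone sketch of that timing pattern, with hypothetical names throughout and not the actual SPDK implementation:

#include <stdio.h>
#include <time.h>

/* Hypothetical stand-in for one FTL management step body. */
typedef int (*step_fn)(void);

static int persist_l2p(void) { return 0; /* dummy step: success */ }

/* Milliseconds elapsed between two monotonic timestamps. */
static double elapsed_ms(struct timespec a, struct timespec b)
{
    return (b.tv_sec - a.tv_sec) * 1e3 + (b.tv_nsec - a.tv_nsec) / 1e6;
}

/* Runs a step, times it, and prints the Action/name/duration/status
 * quadruple in the shape seen throughout this log. */
static int trace_step(const char *dev, const char *name, step_fn fn)
{
    struct timespec start, end;
    int status;

    clock_gettime(CLOCK_MONOTONIC, &start);
    status = fn();
    clock_gettime(CLOCK_MONOTONIC, &end);

    printf("[FTL][%s] Action\n", dev);
    printf("[FTL][%s] name: %s\n", dev, name);
    printf("[FTL][%s] duration: %.3f ms\n", dev, elapsed_ms(start, end));
    printf("[FTL][%s] status: %d\n", dev, status);
    return status;
}

int main(void)
{
    return trace_step("ftl0", "Persist L2P", persist_l2p);
}

The nonzero-status case is what the Rollback records earlier in the log correspond to: when a startup step fails, the already-completed steps are unwound and each unwind is traced with the same quadruple, labelled Rollback instead of Action.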
00:29:39.836 [2024-07-25 11:54:38.646149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.836 [2024-07-25 11:54:38.646369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:39.836 [2024-07-25 11:54:38.646507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.267 ms 00:29:39.836 [2024-07-25 11:54:38.646558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.836 [2024-07-25 11:54:38.678654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.836 [2024-07-25 11:54:38.678711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:39.836 [2024-07-25 11:54:38.678749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.869 ms 00:29:39.836 [2024-07-25 11:54:38.678760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.836 [2024-07-25 11:54:38.696476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.836 [2024-07-25 11:54:38.696526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:39.836 [2024-07-25 11:54:38.696546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.667 ms 00:29:39.836 [2024-07-25 11:54:38.696561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.837 [2024-07-25 11:54:38.700545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.837 [2024-07-25 11:54:38.700601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:39.837 [2024-07-25 11:54:38.700621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.932 ms 00:29:39.837 [2024-07-25 11:54:38.700633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.837 [2024-07-25 11:54:38.729191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.837 [2024-07-25 11:54:38.729233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:29:39.837 [2024-07-25 11:54:38.729265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.534 ms 00:29:39.837 [2024-07-25 11:54:38.729277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.837 [2024-07-25 11:54:38.758591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.837 [2024-07-25 11:54:38.758633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:29:39.837 [2024-07-25 11:54:38.758665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.269 ms 00:29:39.837 [2024-07-25 11:54:38.758677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.837 [2024-07-25 11:54:38.788077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.837 [2024-07-25 11:54:38.788122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:39.837 [2024-07-25 11:54:38.788170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.357 ms 00:29:39.837 [2024-07-25 11:54:38.788182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.837 [2024-07-25 11:54:38.816467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.837 [2024-07-25 11:54:38.816515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:39.837 [2024-07-25 11:54:38.816532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.161 ms 00:29:39.837 [2024-07-25 
11:54:38.816544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.837 [2024-07-25 11:54:38.816590] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:39.837 [2024-07-25 11:54:38.816615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:39.837 [2024-07-25 11:54:38.816630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3584 / 261120 wr_cnt: 1 state: open 00:29:39.837 [2024-07-25 11:54:38.816645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:39.837 [2024-07-25 11:54:38.816657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:39.837 [2024-07-25 11:54:38.816669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:39.837 [2024-07-25 11:54:38.816682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:39.837 [2024-07-25 11:54:38.816694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:39.837 [2024-07-25 11:54:38.816706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:39.837 [2024-07-25 11:54:38.816719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:39.837 [2024-07-25 11:54:38.816731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:39.837 [2024-07-25 11:54:38.816743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:39.837 [2024-07-25 11:54:38.816756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:39.837 [2024-07-25 11:54:38.816768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:39.837 [2024-07-25 11:54:38.816781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:39.837 [2024-07-25 11:54:38.816793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:39.837 [2024-07-25 11:54:38.816806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:39.837 [2024-07-25 11:54:38.816818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:39.837 [2024-07-25 11:54:38.816830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:39.837 [2024-07-25 11:54:38.816843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:39.837 [2024-07-25 11:54:38.816855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:39.837 [2024-07-25 11:54:38.816867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:39.837 [2024-07-25 11:54:38.816880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:39.837 [2024-07-25 11:54:38.816892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:39.837 [2024-07-25 11:54:38.816905] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:39.837 [2024-07-25 11:54:38.816917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:39.837 [2024-07-25 11:54:38.816953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:39.837 [2024-07-25 11:54:38.816968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:39.837 [2024-07-25 11:54:38.816981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:39.837 [2024-07-25 11:54:38.816994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:39.837 [2024-07-25 11:54:38.817006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:39.837 [2024-07-25 11:54:38.817018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:39.837 [2024-07-25 11:54:38.817031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:39.837 [2024-07-25 11:54:38.817044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:39.837 [2024-07-25 11:54:38.817057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:39.837 [2024-07-25 11:54:38.817069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:39.837 [2024-07-25 11:54:38.817081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:39.837 [2024-07-25 11:54:38.817094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:39.837 [2024-07-25 11:54:38.817107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:39.837 [2024-07-25 11:54:38.817120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:39.837 [2024-07-25 11:54:38.817132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:39.837 [2024-07-25 11:54:38.817144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:39.837 [2024-07-25 11:54:38.817156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:39.837 [2024-07-25 11:54:38.817169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:39.837 [2024-07-25 11:54:38.817182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:39.837 [2024-07-25 11:54:38.817194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:39.837 [2024-07-25 11:54:38.817206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:39.837 [2024-07-25 11:54:38.817234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:39.837 [2024-07-25 11:54:38.817247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:39.837 [2024-07-25 
11:54:38.817259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:39.837 [2024-07-25 11:54:38.817271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:39.837 [2024-07-25 11:54:38.817298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:39.837 [2024-07-25 11:54:38.817309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:39.837 [2024-07-25 11:54:38.817336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:39.837 [2024-07-25 11:54:38.817348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:39.837 [2024-07-25 11:54:38.817360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:39.837 [2024-07-25 11:54:38.817372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:39.837 [2024-07-25 11:54:38.817384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:39.837 [2024-07-25 11:54:38.817395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:39.837 [2024-07-25 11:54:38.817406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:39.837 [2024-07-25 11:54:38.817418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:39.837 [2024-07-25 11:54:38.817429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:39.837 [2024-07-25 11:54:38.817441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:39.837 [2024-07-25 11:54:38.817452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:39.837 [2024-07-25 11:54:38.817463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:39.837 [2024-07-25 11:54:38.817478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:39.837 [2024-07-25 11:54:38.817490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:39.837 [2024-07-25 11:54:38.817503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:39.837 [2024-07-25 11:54:38.817514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:39.837 [2024-07-25 11:54:38.817526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:39.837 [2024-07-25 11:54:38.817537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:39.837 [2024-07-25 11:54:38.817549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:39.837 [2024-07-25 11:54:38.817560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:39.838 [2024-07-25 11:54:38.817572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 
00:29:39.838 [2024-07-25 11:54:38.817583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:39.838 [2024-07-25 11:54:38.817595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:39.838 [2024-07-25 11:54:38.817606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:39.838 [2024-07-25 11:54:38.817617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:39.838 [2024-07-25 11:54:38.817628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:39.838 [2024-07-25 11:54:38.817640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:39.838 [2024-07-25 11:54:38.817652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:39.838 [2024-07-25 11:54:38.817667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:39.838 [2024-07-25 11:54:38.817720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:39.838 [2024-07-25 11:54:38.817742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:39.838 [2024-07-25 11:54:38.817758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:39.838 [2024-07-25 11:54:38.817772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:39.838 [2024-07-25 11:54:38.817786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:39.838 [2024-07-25 11:54:38.817798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:39.838 [2024-07-25 11:54:38.817810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:39.838 [2024-07-25 11:54:38.817823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:39.838 [2024-07-25 11:54:38.817835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:39.838 [2024-07-25 11:54:38.817847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:39.838 [2024-07-25 11:54:38.817859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:39.838 [2024-07-25 11:54:38.817871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:39.838 [2024-07-25 11:54:38.817886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:39.838 [2024-07-25 11:54:38.817899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:39.838 [2024-07-25 11:54:38.817911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:39.838 [2024-07-25 11:54:38.817924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:39.838 [2024-07-25 11:54:38.817937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 
wr_cnt: 0 state: free 00:29:39.838 [2024-07-25 11:54:38.817949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:39.838 [2024-07-25 11:54:38.817961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:39.838 [2024-07-25 11:54:38.817984] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:39.838 [2024-07-25 11:54:38.818018] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d133523e-44ee-4c5c-bfeb-b26f2a26ec1d 00:29:39.838 [2024-07-25 11:54:38.818032] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264704 00:29:39.838 [2024-07-25 11:54:38.818059] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:39.838 [2024-07-25 11:54:38.818086] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:39.838 [2024-07-25 11:54:38.818113] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:39.838 [2024-07-25 11:54:38.818124] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:39.838 [2024-07-25 11:54:38.818135] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:39.838 [2024-07-25 11:54:38.818147] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:39.838 [2024-07-25 11:54:38.818157] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:39.838 [2024-07-25 11:54:38.818167] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:39.838 [2024-07-25 11:54:38.818178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.838 [2024-07-25 11:54:38.818196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:39.838 [2024-07-25 11:54:38.818209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.590 ms 00:29:39.838 [2024-07-25 11:54:38.818221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.838 [2024-07-25 11:54:38.835343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.838 [2024-07-25 11:54:38.835383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:39.838 [2024-07-25 11:54:38.835430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.075 ms 00:29:39.838 [2024-07-25 11:54:38.835449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.838 [2024-07-25 11:54:38.835930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.838 [2024-07-25 11:54:38.835972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:39.838 [2024-07-25 11:54:38.836019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.454 ms 00:29:39.838 [2024-07-25 11:54:38.836046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.838 [2024-07-25 11:54:38.876139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:39.838 [2024-07-25 11:54:38.876185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:39.838 [2024-07-25 11:54:38.876219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:39.838 [2024-07-25 11:54:38.876231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.838 [2024-07-25 11:54:38.876300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:39.838 [2024-07-25 11:54:38.876316] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:39.838 [2024-07-25 11:54:38.876334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:39.838 [2024-07-25 11:54:38.876355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.838 [2024-07-25 11:54:38.876507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:39.838 [2024-07-25 11:54:38.876529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:39.838 [2024-07-25 11:54:38.876543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:39.838 [2024-07-25 11:54:38.876554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.838 [2024-07-25 11:54:38.876578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:39.838 [2024-07-25 11:54:38.876593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:39.838 [2024-07-25 11:54:38.876605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:39.838 [2024-07-25 11:54:38.876626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.098 [2024-07-25 11:54:38.970713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:40.098 [2024-07-25 11:54:38.970784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:40.098 [2024-07-25 11:54:38.970826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:40.098 [2024-07-25 11:54:38.970838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.098 [2024-07-25 11:54:39.049354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:40.098 [2024-07-25 11:54:39.049414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:40.098 [2024-07-25 11:54:39.049456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:40.098 [2024-07-25 11:54:39.049469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.098 [2024-07-25 11:54:39.049561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:40.098 [2024-07-25 11:54:39.049579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:40.098 [2024-07-25 11:54:39.049591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:40.098 [2024-07-25 11:54:39.049602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.098 [2024-07-25 11:54:39.049696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:40.098 [2024-07-25 11:54:39.049722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:40.098 [2024-07-25 11:54:39.049735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:40.098 [2024-07-25 11:54:39.049746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.098 [2024-07-25 11:54:39.049877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:40.098 [2024-07-25 11:54:39.049898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:40.098 [2024-07-25 11:54:39.049912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:40.098 [2024-07-25 11:54:39.049924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.098 [2024-07-25 11:54:39.050057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:29:40.098 [2024-07-25 11:54:39.050077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:40.098 [2024-07-25 11:54:39.050090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:40.098 [2024-07-25 11:54:39.050116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.098 [2024-07-25 11:54:39.050182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:40.098 [2024-07-25 11:54:39.050200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:40.098 [2024-07-25 11:54:39.050213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:40.098 [2024-07-25 11:54:39.050224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.098 [2024-07-25 11:54:39.050282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:40.098 [2024-07-25 11:54:39.050299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:40.098 [2024-07-25 11:54:39.050312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:40.098 [2024-07-25 11:54:39.050323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.098 [2024-07-25 11:54:39.050549] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 419.991 ms, result 0 00:29:41.033 00:29:41.033 00:29:41.291 11:54:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:29:43.821 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:29:43.821 11:54:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:29:43.821 11:54:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:29:43.821 11:54:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:43.821 11:54:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:29:43.821 11:54:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:29:43.821 11:54:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:43.821 11:54:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:29:43.821 Process with pid 82655 is not found 00:29:43.821 11:54:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 82655 00:29:43.821 11:54:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@950 -- # '[' -z 82655 ']' 00:29:43.821 11:54:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # kill -0 82655 00:29:43.821 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (82655) - No such process 00:29:43.821 11:54:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@977 -- # echo 'Process with pid 82655 is not found' 00:29:43.821 11:54:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:29:44.080 Remove shared memory files 00:29:44.080 11:54:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:29:44.080 11:54:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:29:44.080 11:54:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:29:44.080 11:54:42 
ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:29:44.080 11:54:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:29:44.080 11:54:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:29:44.080 11:54:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:29:44.080 ************************************ 00:29:44.080 END TEST ftl_dirty_shutdown 00:29:44.080 ************************************ 00:29:44.080 00:29:44.080 real 4m8.993s 00:29:44.080 user 4m47.821s 00:29:44.080 sys 0m38.039s 00:29:44.080 11:54:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:44.080 11:54:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:44.080 11:54:42 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:29:44.080 11:54:42 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:29:44.080 11:54:42 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:44.080 11:54:42 ftl -- common/autotest_common.sh@10 -- # set +x 00:29:44.080 ************************************ 00:29:44.080 START TEST ftl_upgrade_shutdown 00:29:44.080 ************************************ 00:29:44.080 11:54:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:29:44.080 * Looking for test storage... 00:29:44.080 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:29:44.080 11:54:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:29:44.080 11:54:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:29:44.080 11:54:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:29:44.080 11:54:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:29:44.080 11:54:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
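As the run_test invocation above shows, upgrade_shutdown.sh takes the base and cache NVMe PCI addresses as positional arguments. Reproducing just this test outside the CI harness would look roughly like this (a sketch; it assumes a built SPDK tree with the two QEMU NVMe devices bound for userspace use):

    cd /home/vagrant/spdk_repo/spdk
    sudo ./test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0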
00:29:44.080 11:54:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:29:44.080 11:54:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:44.080 11:54:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:29:44.080 11:54:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:29:44.080 11:54:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:44.080 11:54:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:44.080 11:54:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:29:44.080 11:54:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:29:44.080 11:54:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:44.080 11:54:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:44.080 11:54:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:29:44.080 11:54:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:29:44.080 11:54:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:44.080 11:54:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:44.080 11:54:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:29:44.080 11:54:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:29:44.080 11:54:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:44.080 11:54:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:44.080 11:54:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:44.080 11:54:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:44.080 11:54:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:29:44.080 11:54:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:29:44.080 11:54:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:44.080 11:54:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:44.080 11:54:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:44.080 11:54:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:29:44.080 11:54:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:29:44.080 11:54:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:29:44.080 11:54:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:29:44.080 11:54:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:29:44.080 11:54:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:29:44.080 
11:54:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:29:44.080 11:54:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:29:44.080 11:54:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:29:44.080 11:54:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:29:44.080 11:54:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:29:44.080 11:54:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:29:44.080 11:54:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:29:44.080 11:54:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:44.080 11:54:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:44.080 11:54:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:44.080 11:54:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=85215 00:29:44.080 11:54:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:29:44.080 11:54:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:44.080 11:54:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 85215 00:29:44.080 11:54:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 85215 ']' 00:29:44.080 11:54:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:44.080 11:54:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:44.080 11:54:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:44.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:44.080 11:54:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:44.080 11:54:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:44.338 [2024-07-25 11:54:43.229404] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
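The tcp_target_setup step above reduces to starting spdk_tgt pinned to core 0 and blocking until its RPC socket answers. A hedged approximation of what waitforlisten amounts to (the polling loop is illustrative, not the harness code):

    ./build/bin/spdk_tgt '--cpumask=[0]' &
    spdk_tgt_pid=$!
    # poll the default RPC socket until the target responds
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done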
00:29:44.338 [2024-07-25 11:54:43.229799] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85215 ] 00:29:44.596 [2024-07-25 11:54:43.403446] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:44.853 [2024-07-25 11:54:43.689858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:45.787 11:54:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:45.787 11:54:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:29:45.787 11:54:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:45.787 11:54:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:29:45.787 11:54:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:29:45.787 11:54:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:45.787 11:54:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:29:45.787 11:54:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:45.787 11:54:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:29:45.787 11:54:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:45.787 11:54:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:29:45.787 11:54:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:45.787 11:54:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:29:45.787 11:54:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:45.787 11:54:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:29:45.788 11:54:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:45.788 11:54:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:29:45.788 11:54:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:29:45.788 11:54:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:29:45.788 11:54:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:29:45.788 11:54:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:29:45.788 11:54:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:29:45.788 11:54:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:29:46.047 11:54:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:29:46.047 11:54:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:29:46.047 11:54:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:29:46.047 11:54:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=basen1 00:29:46.047 11:54:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:29:46.047 11:54:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:29:46.047 11:54:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 
-- # local nb 00:29:46.047 11:54:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:29:46.306 11:54:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:29:46.306 { 00:29:46.306 "name": "basen1", 00:29:46.306 "aliases": [ 00:29:46.306 "774ea937-34da-4dba-97ba-778e17d99b3c" 00:29:46.306 ], 00:29:46.306 "product_name": "NVMe disk", 00:29:46.306 "block_size": 4096, 00:29:46.306 "num_blocks": 1310720, 00:29:46.306 "uuid": "774ea937-34da-4dba-97ba-778e17d99b3c", 00:29:46.306 "assigned_rate_limits": { 00:29:46.306 "rw_ios_per_sec": 0, 00:29:46.306 "rw_mbytes_per_sec": 0, 00:29:46.306 "r_mbytes_per_sec": 0, 00:29:46.306 "w_mbytes_per_sec": 0 00:29:46.306 }, 00:29:46.306 "claimed": true, 00:29:46.306 "claim_type": "read_many_write_one", 00:29:46.306 "zoned": false, 00:29:46.306 "supported_io_types": { 00:29:46.306 "read": true, 00:29:46.306 "write": true, 00:29:46.306 "unmap": true, 00:29:46.306 "flush": true, 00:29:46.306 "reset": true, 00:29:46.306 "nvme_admin": true, 00:29:46.306 "nvme_io": true, 00:29:46.306 "nvme_io_md": false, 00:29:46.306 "write_zeroes": true, 00:29:46.306 "zcopy": false, 00:29:46.306 "get_zone_info": false, 00:29:46.306 "zone_management": false, 00:29:46.306 "zone_append": false, 00:29:46.306 "compare": true, 00:29:46.306 "compare_and_write": false, 00:29:46.306 "abort": true, 00:29:46.306 "seek_hole": false, 00:29:46.306 "seek_data": false, 00:29:46.306 "copy": true, 00:29:46.306 "nvme_iov_md": false 00:29:46.306 }, 00:29:46.306 "driver_specific": { 00:29:46.306 "nvme": [ 00:29:46.306 { 00:29:46.306 "pci_address": "0000:00:11.0", 00:29:46.306 "trid": { 00:29:46.306 "trtype": "PCIe", 00:29:46.306 "traddr": "0000:00:11.0" 00:29:46.306 }, 00:29:46.306 "ctrlr_data": { 00:29:46.306 "cntlid": 0, 00:29:46.306 "vendor_id": "0x1b36", 00:29:46.306 "model_number": "QEMU NVMe Ctrl", 00:29:46.306 "serial_number": "12341", 00:29:46.306 "firmware_revision": "8.0.0", 00:29:46.306 "subnqn": "nqn.2019-08.org.qemu:12341", 00:29:46.306 "oacs": { 00:29:46.306 "security": 0, 00:29:46.306 "format": 1, 00:29:46.306 "firmware": 0, 00:29:46.306 "ns_manage": 1 00:29:46.306 }, 00:29:46.306 "multi_ctrlr": false, 00:29:46.306 "ana_reporting": false 00:29:46.306 }, 00:29:46.306 "vs": { 00:29:46.306 "nvme_version": "1.4" 00:29:46.306 }, 00:29:46.306 "ns_data": { 00:29:46.306 "id": 1, 00:29:46.306 "can_share": false 00:29:46.306 } 00:29:46.306 } 00:29:46.306 ], 00:29:46.306 "mp_policy": "active_passive" 00:29:46.306 } 00:29:46.306 } 00:29:46.306 ]' 00:29:46.306 11:54:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:29:46.306 11:54:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:29:46.306 11:54:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:29:46.306 11:54:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:29:46.306 11:54:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:29:46.306 11:54:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:29:46.306 11:54:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:29:46.306 11:54:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:29:46.306 11:54:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:29:46.306 11:54:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:46.306 11:54:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:29:46.564 11:54:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=67c7a327-3336-4479-93a7-f8ddca5f40bf 00:29:46.564 11:54:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:29:46.564 11:54:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 67c7a327-3336-4479-93a7-f8ddca5f40bf 00:29:46.822 11:54:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:29:47.082 11:54:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=0f37c6bd-d428-415d-b655-cd5e7bcc461f 00:29:47.082 11:54:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 0f37c6bd-d428-415d-b655-cd5e7bcc461f 00:29:47.352 11:54:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=cc5de25d-6ea4-4888-915b-b1f8399d04b6 00:29:47.352 11:54:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z cc5de25d-6ea4-4888-915b-b1f8399d04b6 ]] 00:29:47.352 11:54:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 cc5de25d-6ea4-4888-915b-b1f8399d04b6 5120 00:29:47.352 11:54:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:29:47.352 11:54:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:29:47.352 11:54:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=cc5de25d-6ea4-4888-915b-b1f8399d04b6 00:29:47.352 11:54:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:29:47.352 11:54:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size cc5de25d-6ea4-4888-915b-b1f8399d04b6 00:29:47.352 11:54:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=cc5de25d-6ea4-4888-915b-b1f8399d04b6 00:29:47.352 11:54:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:29:47.352 11:54:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:29:47.352 11:54:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:29:47.352 11:54:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b cc5de25d-6ea4-4888-915b-b1f8399d04b6 00:29:47.609 11:54:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:29:47.609 { 00:29:47.609 "name": "cc5de25d-6ea4-4888-915b-b1f8399d04b6", 00:29:47.610 "aliases": [ 00:29:47.610 "lvs/basen1p0" 00:29:47.610 ], 00:29:47.610 "product_name": "Logical Volume", 00:29:47.610 "block_size": 4096, 00:29:47.610 "num_blocks": 5242880, 00:29:47.610 "uuid": "cc5de25d-6ea4-4888-915b-b1f8399d04b6", 00:29:47.610 "assigned_rate_limits": { 00:29:47.610 "rw_ios_per_sec": 0, 00:29:47.610 "rw_mbytes_per_sec": 0, 00:29:47.610 "r_mbytes_per_sec": 0, 00:29:47.610 "w_mbytes_per_sec": 0 00:29:47.610 }, 00:29:47.610 "claimed": false, 00:29:47.610 "zoned": false, 00:29:47.610 "supported_io_types": { 00:29:47.610 "read": true, 00:29:47.610 "write": true, 00:29:47.610 "unmap": true, 00:29:47.610 "flush": false, 00:29:47.610 "reset": true, 00:29:47.610 "nvme_admin": false, 00:29:47.610 "nvme_io": false, 00:29:47.610 "nvme_io_md": false, 00:29:47.610 "write_zeroes": true, 00:29:47.610 
"zcopy": false, 00:29:47.610 "get_zone_info": false, 00:29:47.610 "zone_management": false, 00:29:47.610 "zone_append": false, 00:29:47.610 "compare": false, 00:29:47.610 "compare_and_write": false, 00:29:47.610 "abort": false, 00:29:47.610 "seek_hole": true, 00:29:47.610 "seek_data": true, 00:29:47.610 "copy": false, 00:29:47.610 "nvme_iov_md": false 00:29:47.610 }, 00:29:47.610 "driver_specific": { 00:29:47.610 "lvol": { 00:29:47.610 "lvol_store_uuid": "0f37c6bd-d428-415d-b655-cd5e7bcc461f", 00:29:47.610 "base_bdev": "basen1", 00:29:47.610 "thin_provision": true, 00:29:47.610 "num_allocated_clusters": 0, 00:29:47.610 "snapshot": false, 00:29:47.610 "clone": false, 00:29:47.610 "esnap_clone": false 00:29:47.610 } 00:29:47.610 } 00:29:47.610 } 00:29:47.610 ]' 00:29:47.610 11:54:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:29:47.610 11:54:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:29:47.610 11:54:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:29:47.610 11:54:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=5242880 00:29:47.610 11:54:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=20480 00:29:47.610 11:54:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 20480 00:29:47.610 11:54:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:29:47.610 11:54:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:29:47.610 11:54:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:29:48.175 11:54:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:29:48.175 11:54:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:29:48.175 11:54:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:29:48.175 11:54:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:29:48.175 11:54:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:29:48.175 11:54:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d cc5de25d-6ea4-4888-915b-b1f8399d04b6 -c cachen1p0 --l2p_dram_limit 2 00:29:48.743 [2024-07-25 11:54:47.501980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:48.743 [2024-07-25 11:54:47.502057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:48.743 [2024-07-25 11:54:47.502083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:29:48.743 [2024-07-25 11:54:47.502099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:48.743 [2024-07-25 11:54:47.502193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:48.743 [2024-07-25 11:54:47.502217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:48.743 [2024-07-25 11:54:47.502231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.059 ms 00:29:48.743 [2024-07-25 11:54:47.502246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:48.743 [2024-07-25 11:54:47.502286] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:48.743 [2024-07-25 11:54:47.503383] 
mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:48.743 [2024-07-25 11:54:47.503428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:48.743 [2024-07-25 11:54:47.503452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:48.743 [2024-07-25 11:54:47.503467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.158 ms 00:29:48.743 [2024-07-25 11:54:47.503483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:48.743 [2024-07-25 11:54:47.503626] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 290b9015-b66f-4a42-915e-8a7920490245 00:29:48.743 [2024-07-25 11:54:47.505478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:48.743 [2024-07-25 11:54:47.505520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:29:48.743 [2024-07-25 11:54:47.505542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:29:48.743 [2024-07-25 11:54:47.505555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:48.743 [2024-07-25 11:54:47.515004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:48.743 [2024-07-25 11:54:47.515065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:48.743 [2024-07-25 11:54:47.515089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.357 ms 00:29:48.743 [2024-07-25 11:54:47.515103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:48.743 [2024-07-25 11:54:47.515187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:48.743 [2024-07-25 11:54:47.515208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:48.743 [2024-07-25 11:54:47.515225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:29:48.743 [2024-07-25 11:54:47.515237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:48.743 [2024-07-25 11:54:47.515384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:48.743 [2024-07-25 11:54:47.515405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:48.743 [2024-07-25 11:54:47.515424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:29:48.743 [2024-07-25 11:54:47.515450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:48.743 [2024-07-25 11:54:47.515493] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:48.743 [2024-07-25 11:54:47.520757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:48.743 [2024-07-25 11:54:47.520803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:48.743 [2024-07-25 11:54:47.520821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.280 ms 00:29:48.743 [2024-07-25 11:54:47.520837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:48.743 [2024-07-25 11:54:47.520879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:48.743 [2024-07-25 11:54:47.520899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:48.743 [2024-07-25 11:54:47.520913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:48.743 [2024-07-25 11:54:47.520953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
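Stepping back to the get_bdev_size checks above: the jq queries just multiply block_size by num_blocks, so the two JSON dumps work out as follows (values copied from the log):

    echo $(( 4096 * 1310720 / 1048576 ))    # basen1 -> 5120 MiB
    echo $(( 4096 * 5242880 / 1048576 ))    # lvs/basen1p0 -> 20480 MiB

which matches the bdev_size=5120 and bdev_size=20480 results traced above.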
00:29:48.743 [2024-07-25 11:54:47.521002] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:29:48.743 [2024-07-25 11:54:47.521205] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:48.743 [2024-07-25 11:54:47.521232] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:48.743 [2024-07-25 11:54:47.521258] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:29:48.743 [2024-07-25 11:54:47.521276] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:48.743 [2024-07-25 11:54:47.521293] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:29:48.743 [2024-07-25 11:54:47.521307] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:48.743 [2024-07-25 11:54:47.521324] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:48.743 [2024-07-25 11:54:47.521337] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:48.743 [2024-07-25 11:54:47.521353] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:48.743 [2024-07-25 11:54:47.521367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:48.743 [2024-07-25 11:54:47.521382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:48.743 [2024-07-25 11:54:47.521395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.368 ms 00:29:48.743 [2024-07-25 11:54:47.521409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:48.743 [2024-07-25 11:54:47.521507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:48.743 [2024-07-25 11:54:47.521533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:48.743 [2024-07-25 11:54:47.521546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.066 ms 00:29:48.743 [2024-07-25 11:54:47.521564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:48.743 [2024-07-25 11:54:47.521679] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:48.743 [2024-07-25 11:54:47.521704] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:48.743 [2024-07-25 11:54:47.521717] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:48.743 [2024-07-25 11:54:47.521731] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:48.743 [2024-07-25 11:54:47.521743] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:48.743 [2024-07-25 11:54:47.521756] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:48.743 [2024-07-25 11:54:47.521781] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:48.743 [2024-07-25 11:54:47.521795] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:48.743 [2024-07-25 11:54:47.521806] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:48.743 [2024-07-25 11:54:47.521819] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:48.743 [2024-07-25 11:54:47.521829] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:48.743 [2024-07-25 11:54:47.521842] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 
14.75 MiB 00:29:48.743 [2024-07-25 11:54:47.521853] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:48.743 [2024-07-25 11:54:47.521868] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:48.743 [2024-07-25 11:54:47.521880] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:29:48.743 [2024-07-25 11:54:47.521894] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:48.743 [2024-07-25 11:54:47.521905] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:48.743 [2024-07-25 11:54:47.521935] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:48.743 [2024-07-25 11:54:47.521950] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:48.743 [2024-07-25 11:54:47.521964] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:48.743 [2024-07-25 11:54:47.521975] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:48.743 [2024-07-25 11:54:47.521989] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:48.743 [2024-07-25 11:54:47.522000] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:48.743 [2024-07-25 11:54:47.522013] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:48.743 [2024-07-25 11:54:47.522026] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:48.743 [2024-07-25 11:54:47.522041] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:48.743 [2024-07-25 11:54:47.522052] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:48.743 [2024-07-25 11:54:47.522066] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:48.744 [2024-07-25 11:54:47.522077] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:48.744 [2024-07-25 11:54:47.522090] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:48.744 [2024-07-25 11:54:47.522102] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:48.744 [2024-07-25 11:54:47.522115] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:48.744 [2024-07-25 11:54:47.522127] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:48.744 [2024-07-25 11:54:47.522143] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:48.744 [2024-07-25 11:54:47.522155] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:48.744 [2024-07-25 11:54:47.522169] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:48.744 [2024-07-25 11:54:47.522181] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:48.744 [2024-07-25 11:54:47.522194] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:48.744 [2024-07-25 11:54:47.522205] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:48.744 [2024-07-25 11:54:47.522220] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:48.744 [2024-07-25 11:54:47.522232] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:48.744 [2024-07-25 11:54:47.522245] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:48.744 [2024-07-25 11:54:47.522257] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:48.744 [2024-07-25 11:54:47.522270] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base 
device layout: 00:29:48.744 [2024-07-25 11:54:47.522283] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:48.744 [2024-07-25 11:54:47.522298] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:48.744 [2024-07-25 11:54:47.522310] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:48.744 [2024-07-25 11:54:47.522325] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:48.744 [2024-07-25 11:54:47.522337] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:48.744 [2024-07-25 11:54:47.522353] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:48.744 [2024-07-25 11:54:47.522365] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:48.744 [2024-07-25 11:54:47.522379] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:48.744 [2024-07-25 11:54:47.522391] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:48.744 [2024-07-25 11:54:47.522418] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:48.744 [2024-07-25 11:54:47.522438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:48.744 [2024-07-25 11:54:47.522455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:48.744 [2024-07-25 11:54:47.522474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:48.744 [2024-07-25 11:54:47.522489] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:48.744 [2024-07-25 11:54:47.522503] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:48.744 [2024-07-25 11:54:47.522518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:48.744 [2024-07-25 11:54:47.522531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:48.744 [2024-07-25 11:54:47.522546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:48.744 [2024-07-25 11:54:47.522558] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:48.744 [2024-07-25 11:54:47.522575] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:48.744 [2024-07-25 11:54:47.522587] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:48.744 [2024-07-25 11:54:47.522606] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:48.744 [2024-07-25 11:54:47.522618] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:48.744 [2024-07-25 11:54:47.522633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 
blk_offs:0x2f80 blk_sz:0x20 00:29:48.744 [2024-07-25 11:54:47.522649] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:48.744 [2024-07-25 11:54:47.522665] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:29:48.744 [2024-07-25 11:54:47.522679] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:48.744 [2024-07-25 11:54:47.522695] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:48.744 [2024-07-25 11:54:47.522708] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:48.744 [2024-07-25 11:54:47.522723] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:48.744 [2024-07-25 11:54:47.522736] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:48.744 [2024-07-25 11:54:47.522752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:48.744 [2024-07-25 11:54:47.522769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:48.744 [2024-07-25 11:54:47.522785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.134 ms 00:29:48.744 [2024-07-25 11:54:47.522797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:48.744 [2024-07-25 11:54:47.522859] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
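A note on the scrub that follows: on a freshly created volume the FTL core zeroes the entire NV cache data region before startup can complete, which is why it prints a warning first. The timings just below allow a rough rate check; the ~5 GiB figure comes from the NV cache device capacity dumped earlier, so treat the result as approximate:

  # ~5 GiB scrubbed in the 2659.555 ms reported below
  awk 'BEGIN { printf "%.2f GiB/s\n", 5 / 2.659555 }'   # ~1.88 GiB/s effective scrub rate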
00:29:48.744 [2024-07-25 11:54:47.522877] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:29:51.274 [2024-07-25 11:54:50.182428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.274 [2024-07-25 11:54:50.182528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:29:51.274 [2024-07-25 11:54:50.182558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2659.555 ms 00:29:51.274 [2024-07-25 11:54:50.182580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.274 [2024-07-25 11:54:50.222053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.274 [2024-07-25 11:54:50.222115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:51.274 [2024-07-25 11:54:50.222157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 39.170 ms 00:29:51.274 [2024-07-25 11:54:50.222170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.274 [2024-07-25 11:54:50.222318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.274 [2024-07-25 11:54:50.222337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:51.274 [2024-07-25 11:54:50.222358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:29:51.274 [2024-07-25 11:54:50.222370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.274 [2024-07-25 11:54:50.265706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.274 [2024-07-25 11:54:50.265762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:51.274 [2024-07-25 11:54:50.265801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 43.240 ms 00:29:51.274 [2024-07-25 11:54:50.265813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.274 [2024-07-25 11:54:50.265897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.274 [2024-07-25 11:54:50.265914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:51.274 [2024-07-25 11:54:50.265935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:51.274 [2024-07-25 11:54:50.265966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.274 [2024-07-25 11:54:50.266626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.274 [2024-07-25 11:54:50.266652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:51.274 [2024-07-25 11:54:50.266670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.532 ms 00:29:51.274 [2024-07-25 11:54:50.266697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.274 [2024-07-25 11:54:50.266765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.274 [2024-07-25 11:54:50.266786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:51.274 [2024-07-25 11:54:50.266802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:29:51.274 [2024-07-25 11:54:50.266831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.274 [2024-07-25 11:54:50.286891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.274 [2024-07-25 11:54:50.286964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:51.274 [2024-07-25 11:54:50.287004] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.017 ms 00:29:51.274 [2024-07-25 11:54:50.287016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.274 [2024-07-25 11:54:50.301708] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:51.274 [2024-07-25 11:54:50.303211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.274 [2024-07-25 11:54:50.303254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:51.274 [2024-07-25 11:54:50.303274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.058 ms 00:29:51.274 [2024-07-25 11:54:50.303289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.532 [2024-07-25 11:54:50.344823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.532 [2024-07-25 11:54:50.344892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:29:51.532 [2024-07-25 11:54:50.344915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 41.482 ms 00:29:51.532 [2024-07-25 11:54:50.344953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.532 [2024-07-25 11:54:50.345077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.532 [2024-07-25 11:54:50.345117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:51.532 [2024-07-25 11:54:50.345132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.067 ms 00:29:51.532 [2024-07-25 11:54:50.345150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.532 [2024-07-25 11:54:50.375002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.532 [2024-07-25 11:54:50.375077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:29:51.532 [2024-07-25 11:54:50.375111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.779 ms 00:29:51.532 [2024-07-25 11:54:50.375130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.532 [2024-07-25 11:54:50.403831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.532 [2024-07-25 11:54:50.403892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:29:51.532 [2024-07-25 11:54:50.403910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 28.654 ms 00:29:51.532 [2024-07-25 11:54:50.403923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.532 [2024-07-25 11:54:50.404885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.532 [2024-07-25 11:54:50.404948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:51.532 [2024-07-25 11:54:50.404969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.884 ms 00:29:51.532 [2024-07-25 11:54:50.404984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.532 [2024-07-25 11:54:50.498819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.532 [2024-07-25 11:54:50.498931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:29:51.533 [2024-07-25 11:54:50.498984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 93.769 ms 00:29:51.533 [2024-07-25 11:54:50.499004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.533 [2024-07-25 11:54:50.530197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:29:51.533 [2024-07-25 11:54:50.530261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:29:51.533 [2024-07-25 11:54:50.530281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.113 ms 00:29:51.533 [2024-07-25 11:54:50.530296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.533 [2024-07-25 11:54:50.560755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.533 [2024-07-25 11:54:50.560815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:29:51.533 [2024-07-25 11:54:50.560845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.410 ms 00:29:51.533 [2024-07-25 11:54:50.560859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.791 [2024-07-25 11:54:50.591759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.791 [2024-07-25 11:54:50.591824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:29:51.791 [2024-07-25 11:54:50.591848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.856 ms 00:29:51.791 [2024-07-25 11:54:50.591863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.791 [2024-07-25 11:54:50.591915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.791 [2024-07-25 11:54:50.591972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:51.791 [2024-07-25 11:54:50.591987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:29:51.791 [2024-07-25 11:54:50.592006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.791 [2024-07-25 11:54:50.592149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.791 [2024-07-25 11:54:50.592177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:51.791 [2024-07-25 11:54:50.592190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:29:51.791 [2024-07-25 11:54:50.592205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.791 [2024-07-25 11:54:50.593606] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3091.013 ms, result 0 00:29:51.791 { 00:29:51.791 "name": "ftl", 00:29:51.791 "uuid": "290b9015-b66f-4a42-915e-8a7920490245" 00:29:51.791 } 00:29:51.791 11:54:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:29:52.049 [2024-07-25 11:54:50.872548] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:52.049 11:54:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:29:52.307 11:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:29:52.565 [2024-07-25 11:54:51.393309] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:52.565 11:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:29:52.824 [2024-07-25 11:54:51.684161] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:52.824 11:54:51 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:29:53.082 Fill FTL, iteration 1 00:29:53.082 11:54:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:29:53.082 11:54:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:29:53.082 11:54:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:29:53.082 11:54:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:29:53.082 11:54:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:29:53.082 11:54:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:29:53.082 11:54:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:29:53.082 11:54:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:29:53.082 11:54:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:29:53.082 11:54:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:53.082 11:54:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:29:53.082 11:54:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:29:53.082 11:54:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:53.082 11:54:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:53.082 11:54:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:53.082 11:54:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:29:53.082 11:54:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=85343 00:29:53.082 11:54:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:29:53.082 11:54:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:29:53.082 11:54:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 85343 /var/tmp/spdk.tgt.sock 00:29:53.082 11:54:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 85343 ']' 00:29:53.082 11:54:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:29:53.082 11:54:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:53.082 11:54:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:29:53.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:29:53.082 11:54:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:53.082 11:54:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:53.341 [2024-07-25 11:54:52.227160] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
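The block above is the initiator-side setup: tcp_initiator_setup launches a second spdk_tgt pinned to core 1 with its own RPC socket (/var/tmp/spdk.tgt.sock), so that spdk_dd can attach to the NVMe/TCP export without disturbing the target under test, and waitforlisten blocks until that app answers RPCs. A minimal sketch of what the wait amounts to (the suite's helper adds proper timeouts and error handling; rpc_get_methods is just a cheap RPC to probe with):

  rpc=/var/tmp/spdk.tgt.sock
  for _ in $(seq 1 100); do
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc" rpc_get_methods &>/dev/null && break
      sleep 0.1
  done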
00:29:53.341 [2024-07-25 11:54:52.227681] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85343 ] 00:29:53.600 [2024-07-25 11:54:52.398642] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:53.858 [2024-07-25 11:54:52.677630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:54.534 11:54:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:54.534 11:54:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:29:54.534 11:54:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:29:54.792 ftln1 00:29:54.792 11:54:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:29:54.792 11:54:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:29:55.049 11:54:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:29:55.049 11:54:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 85343 00:29:55.049 11:54:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 85343 ']' 00:29:55.049 11:54:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 85343 00:29:55.049 11:54:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:29:55.049 11:54:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:55.049 11:54:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85343 00:29:55.049 killing process with pid 85343 00:29:55.049 11:54:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:55.049 11:54:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:55.049 11:54:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85343' 00:29:55.049 11:54:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 85343 00:29:55.049 11:54:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 85343 00:29:57.576 11:54:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:29:57.576 11:54:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:29:57.576 [2024-07-25 11:54:56.265183] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
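With the controller attached as 'ftl', the namespace shows up on the initiator side as the bdev ftln1, the bdev subsystem config is saved to ini.json for the spdk_dd runs, and the first fill starts: /dev/urandom streamed to ftln1 in 1 MiB blocks at queue depth 2. The sizing lines up with the size=1073741824 set at the top of the script:

  # one fill iteration: count x bs
  echo $(( 1024 * 1048576 ))   # 1073741824 bytes = 1 GiB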
00:29:57.576 [2024-07-25 11:54:56.265400] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85396 ] 00:29:57.576 [2024-07-25 11:54:56.437675] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:57.833 [2024-07-25 11:54:56.693586] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:04.421  Copying: 210/1024 [MB] (210 MBps) Copying: 423/1024 [MB] (213 MBps) Copying: 637/1024 [MB] (214 MBps) Copying: 848/1024 [MB] (211 MBps) Copying: 1024/1024 [MB] (average 211 MBps) 00:30:04.421 00:30:04.421 Calculate MD5 checksum, iteration 1 00:30:04.421 11:55:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:30:04.421 11:55:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:30:04.421 11:55:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:04.421 11:55:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:04.421 11:55:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:04.421 11:55:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:04.421 11:55:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:04.421 11:55:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:04.421 [2024-07-25 11:55:03.279138] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
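The fill settles at roughly 211 MBps, and the checksum pass then reads the same gigabyte back through a fresh spdk_dd (--ib=ftln1 --skip=0) into test/ftl/file and hashes it on the host. The digest lands in sums[0] a little further down; keeping one digest per iteration is presumably what lets the test re-verify the data after the prep_upgrade_on_shutdown restart. The verify shape, as the script runs it:

  md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 '-d '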
00:30:04.421 [2024-07-25 11:55:03.279331] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85466 ] 00:30:04.421 [2024-07-25 11:55:03.456160] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:04.680 [2024-07-25 11:55:03.691144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:08.445  Copying: 501/1024 [MB] (501 MBps) Copying: 1003/1024 [MB] (502 MBps) Copying: 1024/1024 [MB] (average 501 MBps) 00:30:08.445 00:30:08.445 11:55:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:30:08.445 11:55:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:10.999 11:55:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:30:10.999 Fill FTL, iteration 2 00:30:10.999 11:55:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=996042ad09dfe99b9d31015b747318d2 00:30:10.999 11:55:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:30:10.999 11:55:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:10.999 11:55:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:30:10.999 11:55:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:30:10.999 11:55:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:10.999 11:55:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:10.999 11:55:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:10.999 11:55:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:10.999 11:55:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:30:10.999 [2024-07-25 11:55:09.584207] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
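Iteration 1's digest is recorded above as 996042ad09dfe99b9d31015b747318d2, and iteration 2 now writes the second gigabyte: --seek=1024 is in bs units, so the output lands at a 1 GiB offset, immediately after iteration 1's data (the read-back later uses --skip=1024 the same way). The bookkeeping pattern the script follows, with values from this run:

  sums[0]=996042ad09dfe99b9d31015b747318d2   # digest of the GiB at offset 0
  # sums[1] is appended after the second read-back below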
00:30:10.999 [2024-07-25 11:55:09.584444] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85534 ] 00:30:10.999 [2024-07-25 11:55:09.768568] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:11.272 [2024-07-25 11:55:10.049730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:17.833  Copying: 200/1024 [MB] (200 MBps) Copying: 401/1024 [MB] (201 MBps) Copying: 611/1024 [MB] (210 MBps) Copying: 822/1024 [MB] (211 MBps) Copying: 1024/1024 [MB] (average 206 MBps) 00:30:17.833 00:30:17.833 11:55:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:30:17.833 11:55:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:30:17.833 Calculate MD5 checksum, iteration 2 00:30:17.833 11:55:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:17.833 11:55:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:17.833 11:55:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:17.833 11:55:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:17.833 11:55:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:17.833 11:55:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:17.833 [2024-07-25 11:55:16.857087] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
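The second read-back pulls the gigabyte at offset 1 GiB (--skip=1024) and hashes it below, completing the pair of digests. A hedged sketch of the comparison the test is building toward, once the same regions are re-read after the upgrade restart (variable names illustrative, not the script's literal code):

  # re-hash the re-read region and compare against the banked digest
  [[ "${sums[1]}" == "$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 '-d ')" ]] \
      && echo OK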
00:30:17.833 [2024-07-25 11:55:16.857907] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85604 ] 00:30:18.091 [2024-07-25 11:55:17.046437] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:18.349 [2024-07-25 11:55:17.301284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:23.084  Copying: 470/1024 [MB] (470 MBps) Copying: 945/1024 [MB] (475 MBps) Copying: 1024/1024 [MB] (average 466 MBps) 00:30:23.084 00:30:23.084 11:55:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:30:23.084 11:55:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:24.999 11:55:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:30:24.999 11:55:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=0cf2b616595cf45ac64a3f0691fcc803 00:30:24.999 11:55:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:30:24.999 11:55:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:24.999 11:55:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:25.276 [2024-07-25 11:55:24.203468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.276 [2024-07-25 11:55:24.203544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:25.276 [2024-07-25 11:55:24.203585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:30:25.276 [2024-07-25 11:55:24.203605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.276 [2024-07-25 11:55:24.203679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.276 [2024-07-25 11:55:24.203696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:25.276 [2024-07-25 11:55:24.203726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:30:25.276 [2024-07-25 11:55:24.203755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.276 [2024-07-25 11:55:24.203801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.276 [2024-07-25 11:55:24.203818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:25.276 [2024-07-25 11:55:24.203832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:25.276 [2024-07-25 11:55:24.203844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.276 [2024-07-25 11:55:24.203942] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.452 ms, result 0 00:30:25.276 true 00:30:25.276 11:55:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:25.542 { 00:30:25.542 "name": "ftl", 00:30:25.542 "properties": [ 00:30:25.542 { 00:30:25.542 "name": "superblock_version", 00:30:25.542 "value": 5, 00:30:25.542 "read-only": true 00:30:25.542 }, 00:30:25.542 { 00:30:25.542 "name": "base_device", 00:30:25.542 "bands": [ 00:30:25.542 { 00:30:25.542 "id": 0, 00:30:25.542 "state": "FREE", 00:30:25.542 "validity": 0.0 00:30:25.542 }, 
00:30:25.542 { 00:30:25.542 "id": 1, 00:30:25.542 "state": "FREE", 00:30:25.542 "validity": 0.0 00:30:25.542 }, 00:30:25.542 { 00:30:25.542 "id": 2, 00:30:25.542 "state": "FREE", 00:30:25.542 "validity": 0.0 00:30:25.542 }, 00:30:25.542 { 00:30:25.542 "id": 3, 00:30:25.542 "state": "FREE", 00:30:25.542 "validity": 0.0 00:30:25.542 }, 00:30:25.542 { 00:30:25.542 "id": 4, 00:30:25.542 "state": "FREE", 00:30:25.542 "validity": 0.0 00:30:25.542 }, 00:30:25.542 { 00:30:25.542 "id": 5, 00:30:25.542 "state": "FREE", 00:30:25.542 "validity": 0.0 00:30:25.542 }, 00:30:25.542 { 00:30:25.542 "id": 6, 00:30:25.542 "state": "FREE", 00:30:25.542 "validity": 0.0 00:30:25.542 }, 00:30:25.542 { 00:30:25.542 "id": 7, 00:30:25.542 "state": "FREE", 00:30:25.542 "validity": 0.0 00:30:25.542 }, 00:30:25.542 { 00:30:25.542 "id": 8, 00:30:25.542 "state": "FREE", 00:30:25.542 "validity": 0.0 00:30:25.542 }, 00:30:25.542 { 00:30:25.542 "id": 9, 00:30:25.542 "state": "FREE", 00:30:25.542 "validity": 0.0 00:30:25.542 }, 00:30:25.542 { 00:30:25.542 "id": 10, 00:30:25.542 "state": "FREE", 00:30:25.542 "validity": 0.0 00:30:25.542 }, 00:30:25.542 { 00:30:25.542 "id": 11, 00:30:25.542 "state": "FREE", 00:30:25.542 "validity": 0.0 00:30:25.542 }, 00:30:25.542 { 00:30:25.542 "id": 12, 00:30:25.542 "state": "FREE", 00:30:25.542 "validity": 0.0 00:30:25.542 }, 00:30:25.542 { 00:30:25.542 "id": 13, 00:30:25.542 "state": "FREE", 00:30:25.542 "validity": 0.0 00:30:25.542 }, 00:30:25.542 { 00:30:25.542 "id": 14, 00:30:25.542 "state": "FREE", 00:30:25.542 "validity": 0.0 00:30:25.542 }, 00:30:25.542 { 00:30:25.542 "id": 15, 00:30:25.542 "state": "FREE", 00:30:25.542 "validity": 0.0 00:30:25.542 }, 00:30:25.542 { 00:30:25.542 "id": 16, 00:30:25.542 "state": "FREE", 00:30:25.542 "validity": 0.0 00:30:25.542 }, 00:30:25.542 { 00:30:25.542 "id": 17, 00:30:25.542 "state": "FREE", 00:30:25.542 "validity": 0.0 00:30:25.542 } 00:30:25.542 ], 00:30:25.542 "read-only": true 00:30:25.542 }, 00:30:25.542 { 00:30:25.542 "name": "cache_device", 00:30:25.542 "type": "bdev", 00:30:25.542 "chunks": [ 00:30:25.542 { 00:30:25.542 "id": 0, 00:30:25.542 "state": "INACTIVE", 00:30:25.542 "utilization": 0.0 00:30:25.542 }, 00:30:25.542 { 00:30:25.542 "id": 1, 00:30:25.542 "state": "CLOSED", 00:30:25.542 "utilization": 1.0 00:30:25.542 }, 00:30:25.542 { 00:30:25.542 "id": 2, 00:30:25.542 "state": "CLOSED", 00:30:25.542 "utilization": 1.0 00:30:25.542 }, 00:30:25.542 { 00:30:25.542 "id": 3, 00:30:25.542 "state": "OPEN", 00:30:25.542 "utilization": 0.001953125 00:30:25.542 }, 00:30:25.542 { 00:30:25.542 "id": 4, 00:30:25.542 "state": "OPEN", 00:30:25.542 "utilization": 0.0 00:30:25.542 } 00:30:25.542 ], 00:30:25.542 "read-only": true 00:30:25.542 }, 00:30:25.542 { 00:30:25.542 "name": "verbose_mode", 00:30:25.542 "value": true, 00:30:25.542 "unit": "", 00:30:25.542 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:25.542 }, 00:30:25.542 { 00:30:25.542 "name": "prep_upgrade_on_shutdown", 00:30:25.542 "value": false, 00:30:25.542 "unit": "", 00:30:25.542 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:25.542 } 00:30:25.542 ] 00:30:25.542 } 00:30:25.542 11:55:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:30:25.801 [2024-07-25 11:55:24.700139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.801 [2024-07-25 
11:55:24.700213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:25.801 [2024-07-25 11:55:24.700238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:30:25.801 [2024-07-25 11:55:24.700256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.801 [2024-07-25 11:55:24.700297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.801 [2024-07-25 11:55:24.700315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:25.801 [2024-07-25 11:55:24.700329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:25.801 [2024-07-25 11:55:24.700341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.801 [2024-07-25 11:55:24.700369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.801 [2024-07-25 11:55:24.700385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:25.801 [2024-07-25 11:55:24.700398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:25.801 [2024-07-25 11:55:24.700410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.801 [2024-07-25 11:55:24.700509] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.384 ms, result 0 00:30:25.801 true 00:30:25.801 11:55:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:30:25.801 11:55:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:25.801 11:55:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:30:26.060 11:55:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:30:26.060 11:55:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:30:26.060 11:55:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:26.319 [2024-07-25 11:55:25.247264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:26.319 [2024-07-25 11:55:25.247351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:26.319 [2024-07-25 11:55:25.247390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:30:26.319 [2024-07-25 11:55:25.247402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:26.319 [2024-07-25 11:55:25.247474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:26.319 [2024-07-25 11:55:25.247492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:26.319 [2024-07-25 11:55:25.247505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:26.319 [2024-07-25 11:55:25.247517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:26.319 [2024-07-25 11:55:25.247545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:26.319 [2024-07-25 11:55:25.247561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:26.319 [2024-07-25 11:55:25.247574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:26.319 [2024-07-25 11:55:25.247586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
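The @63 step turns the JSON property dump into a single number: how many cache chunks hold any data. Running the same filter over the chunk list printed above (abridged here to the fields the filter touches) reproduces the used=3 on the next line: chunks 1 and 2 are CLOSED and full, chunk 3 is OPEN with a sliver of data, and the INACTIVE/empty chunks fall out:

  echo '{"properties":[{"name":"cache_device","chunks":[
    {"id":0,"state":"INACTIVE","utilization":0.0},
    {"id":1,"state":"CLOSED","utilization":1.0},
    {"id":2,"state":"CLOSED","utilization":1.0},
    {"id":3,"state":"OPEN","utilization":0.001953125},
    {"id":4,"state":"OPEN","utilization":0.0}]}]}' |
  jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length'
  # -> 3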
00:30:26.319 [2024-07-25 11:55:25.247670] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.395 ms, result 0 00:30:26.319 true 00:30:26.319 11:55:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:26.578 { 00:30:26.578 "name": "ftl", 00:30:26.578 "properties": [ 00:30:26.578 { 00:30:26.578 "name": "superblock_version", 00:30:26.578 "value": 5, 00:30:26.578 "read-only": true 00:30:26.578 }, 00:30:26.578 { 00:30:26.578 "name": "base_device", 00:30:26.578 "bands": [ 00:30:26.578 { 00:30:26.578 "id": 0, 00:30:26.578 "state": "FREE", 00:30:26.578 "validity": 0.0 00:30:26.578 }, 00:30:26.578 { 00:30:26.578 "id": 1, 00:30:26.578 "state": "FREE", 00:30:26.578 "validity": 0.0 00:30:26.578 }, 00:30:26.578 { 00:30:26.578 "id": 2, 00:30:26.578 "state": "FREE", 00:30:26.578 "validity": 0.0 00:30:26.578 }, 00:30:26.578 { 00:30:26.578 "id": 3, 00:30:26.578 "state": "FREE", 00:30:26.578 "validity": 0.0 00:30:26.578 }, 00:30:26.578 { 00:30:26.578 "id": 4, 00:30:26.578 "state": "FREE", 00:30:26.578 "validity": 0.0 00:30:26.578 }, 00:30:26.578 { 00:30:26.578 "id": 5, 00:30:26.578 "state": "FREE", 00:30:26.578 "validity": 0.0 00:30:26.578 }, 00:30:26.578 { 00:30:26.578 "id": 6, 00:30:26.578 "state": "FREE", 00:30:26.578 "validity": 0.0 00:30:26.578 }, 00:30:26.578 { 00:30:26.578 "id": 7, 00:30:26.578 "state": "FREE", 00:30:26.578 "validity": 0.0 00:30:26.578 }, 00:30:26.578 { 00:30:26.578 "id": 8, 00:30:26.578 "state": "FREE", 00:30:26.578 "validity": 0.0 00:30:26.578 }, 00:30:26.578 { 00:30:26.578 "id": 9, 00:30:26.578 "state": "FREE", 00:30:26.578 "validity": 0.0 00:30:26.578 }, 00:30:26.578 { 00:30:26.578 "id": 10, 00:30:26.578 "state": "FREE", 00:30:26.578 "validity": 0.0 00:30:26.578 }, 00:30:26.578 { 00:30:26.578 "id": 11, 00:30:26.578 "state": "FREE", 00:30:26.578 "validity": 0.0 00:30:26.578 }, 00:30:26.578 { 00:30:26.578 "id": 12, 00:30:26.578 "state": "FREE", 00:30:26.578 "validity": 0.0 00:30:26.578 }, 00:30:26.578 { 00:30:26.578 "id": 13, 00:30:26.578 "state": "FREE", 00:30:26.578 "validity": 0.0 00:30:26.578 }, 00:30:26.578 { 00:30:26.578 "id": 14, 00:30:26.578 "state": "FREE", 00:30:26.578 "validity": 0.0 00:30:26.578 }, 00:30:26.578 { 00:30:26.578 "id": 15, 00:30:26.578 "state": "FREE", 00:30:26.578 "validity": 0.0 00:30:26.578 }, 00:30:26.578 { 00:30:26.578 "id": 16, 00:30:26.578 "state": "FREE", 00:30:26.578 "validity": 0.0 00:30:26.578 }, 00:30:26.578 { 00:30:26.578 "id": 17, 00:30:26.578 "state": "FREE", 00:30:26.578 "validity": 0.0 00:30:26.578 } 00:30:26.578 ], 00:30:26.578 "read-only": true 00:30:26.578 }, 00:30:26.578 { 00:30:26.578 "name": "cache_device", 00:30:26.578 "type": "bdev", 00:30:26.578 "chunks": [ 00:30:26.578 { 00:30:26.578 "id": 0, 00:30:26.578 "state": "INACTIVE", 00:30:26.578 "utilization": 0.0 00:30:26.578 }, 00:30:26.578 { 00:30:26.578 "id": 1, 00:30:26.578 "state": "CLOSED", 00:30:26.578 "utilization": 1.0 00:30:26.578 }, 00:30:26.578 { 00:30:26.578 "id": 2, 00:30:26.578 "state": "CLOSED", 00:30:26.578 "utilization": 1.0 00:30:26.578 }, 00:30:26.578 { 00:30:26.578 "id": 3, 00:30:26.578 "state": "OPEN", 00:30:26.578 "utilization": 0.001953125 00:30:26.578 }, 00:30:26.578 { 00:30:26.578 "id": 4, 00:30:26.578 "state": "OPEN", 00:30:26.578 "utilization": 0.0 00:30:26.578 } 00:30:26.578 ], 00:30:26.578 "read-only": true 00:30:26.578 }, 00:30:26.578 { 00:30:26.578 "name": "verbose_mode", 00:30:26.578 "value": 
true, 00:30:26.578 "unit": "", 00:30:26.578 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:26.578 }, 00:30:26.578 { 00:30:26.578 "name": "prep_upgrade_on_shutdown", 00:30:26.578 "value": true, 00:30:26.578 "unit": "", 00:30:26.578 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:26.578 } 00:30:26.578 ] 00:30:26.578 } 00:30:26.578 11:55:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:30:26.578 11:55:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 85215 ]] 00:30:26.578 11:55:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 85215 00:30:26.578 11:55:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 85215 ']' 00:30:26.578 11:55:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 85215 00:30:26.578 11:55:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:30:26.578 11:55:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:26.578 11:55:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85215 00:30:26.578 killing process with pid 85215 00:30:26.578 11:55:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:26.578 11:55:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:26.578 11:55:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85215' 00:30:26.578 11:55:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 85215 00:30:26.578 11:55:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 85215 00:30:27.955 [2024-07-25 11:55:26.647102] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:30:27.955 [2024-07-25 11:55:26.666510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.955 [2024-07-25 11:55:26.666566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:30:27.955 [2024-07-25 11:55:26.666604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:27.955 [2024-07-25 11:55:26.666616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.956 [2024-07-25 11:55:26.666648] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:30:27.956 [2024-07-25 11:55:26.670341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.956 [2024-07-25 11:55:26.670392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:30:27.956 [2024-07-25 11:55:26.670431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.655 ms 00:30:27.956 [2024-07-25 11:55:26.670443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.169 [2024-07-25 11:55:37.479949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.169 [2024-07-25 11:55:37.480055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:30:40.169 [2024-07-25 11:55:37.480083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10809.502 ms 00:30:40.169 [2024-07-25 11:55:37.480097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.169 [2024-07-25 11:55:37.481571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:30:40.169 [2024-07-25 11:55:37.481616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:30:40.169 [2024-07-25 11:55:37.481634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.445 ms 00:30:40.169 [2024-07-25 11:55:37.481647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.169 [2024-07-25 11:55:37.482850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.169 [2024-07-25 11:55:37.482885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:30:40.169 [2024-07-25 11:55:37.482911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.160 ms 00:30:40.169 [2024-07-25 11:55:37.482937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.169 [2024-07-25 11:55:37.495947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.169 [2024-07-25 11:55:37.495997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:30:40.169 [2024-07-25 11:55:37.496016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.958 ms 00:30:40.169 [2024-07-25 11:55:37.496029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.169 [2024-07-25 11:55:37.504003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.169 [2024-07-25 11:55:37.504068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:30:40.169 [2024-07-25 11:55:37.504089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.928 ms 00:30:40.169 [2024-07-25 11:55:37.504102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.169 [2024-07-25 11:55:37.504227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.169 [2024-07-25 11:55:37.504248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:30:40.169 [2024-07-25 11:55:37.504263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.068 ms 00:30:40.169 [2024-07-25 11:55:37.504277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.169 [2024-07-25 11:55:37.516375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.169 [2024-07-25 11:55:37.516432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:30:40.169 [2024-07-25 11:55:37.516451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.062 ms 00:30:40.169 [2024-07-25 11:55:37.516463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.169 [2024-07-25 11:55:37.528536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.169 [2024-07-25 11:55:37.528596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:30:40.169 [2024-07-25 11:55:37.528615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.025 ms 00:30:40.169 [2024-07-25 11:55:37.528627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.169 [2024-07-25 11:55:37.540721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.169 [2024-07-25 11:55:37.540768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:30:40.169 [2024-07-25 11:55:37.540787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.047 ms 00:30:40.169 [2024-07-25 11:55:37.540799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.169 [2024-07-25 11:55:37.552735] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:30:40.169 [2024-07-25 11:55:37.552782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:30:40.169 [2024-07-25 11:55:37.552800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.829 ms 00:30:40.169 [2024-07-25 11:55:37.552812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.169 [2024-07-25 11:55:37.552859] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:30:40.169 [2024-07-25 11:55:37.552886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:40.169 [2024-07-25 11:55:37.552904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:30:40.169 [2024-07-25 11:55:37.552945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:30:40.169 [2024-07-25 11:55:37.552963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:40.169 [2024-07-25 11:55:37.552977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:40.169 [2024-07-25 11:55:37.552990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:40.170 [2024-07-25 11:55:37.553003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:40.170 [2024-07-25 11:55:37.553017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:40.170 [2024-07-25 11:55:37.553030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:40.170 [2024-07-25 11:55:37.553043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:40.170 [2024-07-25 11:55:37.553056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:40.170 [2024-07-25 11:55:37.553069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:40.170 [2024-07-25 11:55:37.553082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:40.170 [2024-07-25 11:55:37.553122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:40.170 [2024-07-25 11:55:37.553136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:40.170 [2024-07-25 11:55:37.553148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:40.170 [2024-07-25 11:55:37.553161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:40.170 [2024-07-25 11:55:37.553174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:40.170 [2024-07-25 11:55:37.553190] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:30:40.170 [2024-07-25 11:55:37.553203] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 290b9015-b66f-4a42-915e-8a7920490245 00:30:40.170 [2024-07-25 11:55:37.553216] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:30:40.170 [2024-07-25 11:55:37.553229] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 
00:30:40.170 [2024-07-25 11:55:37.553247] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:30:40.170 [2024-07-25 11:55:37.553261] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:30:40.170 [2024-07-25 11:55:37.553273] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:30:40.170 [2024-07-25 11:55:37.553287] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:30:40.170 [2024-07-25 11:55:37.553306] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:30:40.170 [2024-07-25 11:55:37.553317] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:30:40.170 [2024-07-25 11:55:37.553327] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:30:40.170 [2024-07-25 11:55:37.553339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.170 [2024-07-25 11:55:37.553352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:30:40.170 [2024-07-25 11:55:37.553364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.483 ms 00:30:40.170 [2024-07-25 11:55:37.553378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.170 [2024-07-25 11:55:37.570561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.170 [2024-07-25 11:55:37.570633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:30:40.170 [2024-07-25 11:55:37.570654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.153 ms 00:30:40.170 [2024-07-25 11:55:37.570667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.170 [2024-07-25 11:55:37.571190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.170 [2024-07-25 11:55:37.571210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:30:40.170 [2024-07-25 11:55:37.571225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.478 ms 00:30:40.170 [2024-07-25 11:55:37.571237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.170 [2024-07-25 11:55:37.624871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:40.170 [2024-07-25 11:55:37.624976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:40.170 [2024-07-25 11:55:37.625014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:40.170 [2024-07-25 11:55:37.625027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.170 [2024-07-25 11:55:37.625106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:40.170 [2024-07-25 11:55:37.625124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:40.170 [2024-07-25 11:55:37.625138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:40.170 [2024-07-25 11:55:37.625156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.170 [2024-07-25 11:55:37.625324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:40.170 [2024-07-25 11:55:37.625352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:40.170 [2024-07-25 11:55:37.625366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:40.170 [2024-07-25 11:55:37.625378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.170 [2024-07-25 11:55:37.625418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl] Rollback 00:30:40.170 [2024-07-25 11:55:37.625434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:40.170 [2024-07-25 11:55:37.625448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:40.170 [2024-07-25 11:55:37.625459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.170 [2024-07-25 11:55:37.734653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:40.170 [2024-07-25 11:55:37.734748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:40.170 [2024-07-25 11:55:37.734770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:40.170 [2024-07-25 11:55:37.734784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.170 [2024-07-25 11:55:37.823179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:40.170 [2024-07-25 11:55:37.823278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:40.170 [2024-07-25 11:55:37.823302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:40.170 [2024-07-25 11:55:37.823316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.170 [2024-07-25 11:55:37.823481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:40.170 [2024-07-25 11:55:37.823503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:40.170 [2024-07-25 11:55:37.823531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:40.170 [2024-07-25 11:55:37.823544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.170 [2024-07-25 11:55:37.823609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:40.170 [2024-07-25 11:55:37.823628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:40.170 [2024-07-25 11:55:37.823642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:40.170 [2024-07-25 11:55:37.823654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.170 [2024-07-25 11:55:37.823795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:40.170 [2024-07-25 11:55:37.823821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:40.170 [2024-07-25 11:55:37.823835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:40.170 [2024-07-25 11:55:37.823854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.170 [2024-07-25 11:55:37.823911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:40.170 [2024-07-25 11:55:37.823966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:30:40.170 [2024-07-25 11:55:37.823982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:40.170 [2024-07-25 11:55:37.823994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.170 [2024-07-25 11:55:37.824069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:40.170 [2024-07-25 11:55:37.824088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:40.170 [2024-07-25 11:55:37.824102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:40.170 [2024-07-25 11:55:37.824122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.170 [2024-07-25 11:55:37.824198] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:40.170 [2024-07-25 11:55:37.824216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:40.170 [2024-07-25 11:55:37.824229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:40.170 [2024-07-25 11:55:37.824242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.170 [2024-07-25 11:55:37.824415] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 11157.903 ms, result 0 00:30:42.705 11:55:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:30:42.705 11:55:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:30:42.705 11:55:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:30:42.705 11:55:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:30:42.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:42.705 11:55:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:42.705 11:55:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=85843 00:30:42.705 11:55:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:30:42.705 11:55:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:42.705 11:55:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 85843 00:30:42.705 11:55:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 85843 ']' 00:30:42.705 11:55:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:42.705 11:55:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:42.705 11:55:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:42.705 11:55:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:42.705 11:55:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:42.963 [2024-07-25 11:55:41.835153] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:30:42.963 [2024-07-25 11:55:41.835840] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85843 ] 00:30:43.221 [2024-07-25 11:55:42.035685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:43.479 [2024-07-25 11:55:42.326864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:44.413 [2024-07-25 11:55:43.273634] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:44.413 [2024-07-25 11:55:43.274069] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:44.413 [2024-07-25 11:55:43.428535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.413 [2024-07-25 11:55:43.428882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:30:44.413 [2024-07-25 11:55:43.429068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:30:44.413 [2024-07-25 11:55:43.429205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.413 [2024-07-25 11:55:43.429377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.413 [2024-07-25 11:55:43.429466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:44.413 [2024-07-25 11:55:43.429597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.086 ms 00:30:44.413 [2024-07-25 11:55:43.429650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.413 [2024-07-25 11:55:43.429824] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:30:44.414 [2024-07-25 11:55:43.431035] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:30:44.414 [2024-07-25 11:55:43.431200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.414 [2024-07-25 11:55:43.431222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:44.414 [2024-07-25 11:55:43.431236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.401 ms 00:30:44.414 [2024-07-25 11:55:43.431258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.414 [2024-07-25 11:55:43.433604] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:30:44.414 [2024-07-25 11:55:43.451456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.414 [2024-07-25 11:55:43.451827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:30:44.414 [2024-07-25 11:55:43.452023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.845 ms 00:30:44.414 [2024-07-25 11:55:43.452172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.414 [2024-07-25 11:55:43.452341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.414 [2024-07-25 11:55:43.452367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:30:44.414 [2024-07-25 11:55:43.452393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.063 ms 00:30:44.414 [2024-07-25 11:55:43.452406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.414 [2024-07-25 11:55:43.462577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.414 [2024-07-25 
11:55:43.462640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:44.414 [2024-07-25 11:55:43.462661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.956 ms 00:30:44.414 [2024-07-25 11:55:43.462674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.414 [2024-07-25 11:55:43.462798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.414 [2024-07-25 11:55:43.462821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:44.414 [2024-07-25 11:55:43.462841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.061 ms 00:30:44.414 [2024-07-25 11:55:43.462854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.414 [2024-07-25 11:55:43.463010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.414 [2024-07-25 11:55:43.463032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:30:44.414 [2024-07-25 11:55:43.463050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:30:44.414 [2024-07-25 11:55:43.463062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.414 [2024-07-25 11:55:43.463107] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:30:44.672 [2024-07-25 11:55:43.468456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.672 [2024-07-25 11:55:43.468498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:44.672 [2024-07-25 11:55:43.468516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.361 ms 00:30:44.672 [2024-07-25 11:55:43.468529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.672 [2024-07-25 11:55:43.468589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.672 [2024-07-25 11:55:43.468608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:30:44.672 [2024-07-25 11:55:43.468626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:30:44.672 [2024-07-25 11:55:43.468638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.673 [2024-07-25 11:55:43.468697] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:30:44.673 [2024-07-25 11:55:43.468742] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:30:44.673 [2024-07-25 11:55:43.468802] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:30:44.673 [2024-07-25 11:55:43.468824] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x168 bytes 00:30:44.673 [2024-07-25 11:55:43.468956] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:30:44.673 [2024-07-25 11:55:43.468982] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:30:44.673 [2024-07-25 11:55:43.468998] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:30:44.673 [2024-07-25 11:55:43.469014] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:30:44.673 [2024-07-25 11:55:43.469029] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:30:44.673 [2024-07-25 11:55:43.469042] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:30:44.673 [2024-07-25 11:55:43.469055] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:30:44.673 [2024-07-25 11:55:43.469066] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:30:44.673 [2024-07-25 11:55:43.469078] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:30:44.673 [2024-07-25 11:55:43.469091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.673 [2024-07-25 11:55:43.469103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:30:44.673 [2024-07-25 11:55:43.469115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.399 ms 00:30:44.673 [2024-07-25 11:55:43.469131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.673 [2024-07-25 11:55:43.469236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.673 [2024-07-25 11:55:43.469259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:30:44.673 [2024-07-25 11:55:43.469272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.066 ms 00:30:44.673 [2024-07-25 11:55:43.469283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.673 [2024-07-25 11:55:43.469403] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:30:44.673 [2024-07-25 11:55:43.469421] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:30:44.673 [2024-07-25 11:55:43.469434] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:44.673 [2024-07-25 11:55:43.469447] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:44.673 [2024-07-25 11:55:43.469464] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:30:44.673 [2024-07-25 11:55:43.469475] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:30:44.673 [2024-07-25 11:55:43.469486] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:30:44.673 [2024-07-25 11:55:43.469497] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:30:44.673 [2024-07-25 11:55:43.469509] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:30:44.673 [2024-07-25 11:55:43.469520] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:44.673 [2024-07-25 11:55:43.469530] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:30:44.673 [2024-07-25 11:55:43.469541] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:30:44.673 [2024-07-25 11:55:43.469551] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:44.673 [2024-07-25 11:55:43.469562] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:30:44.673 [2024-07-25 11:55:43.469573] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:30:44.673 [2024-07-25 11:55:43.469595] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:44.673 [2024-07-25 11:55:43.469608] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:30:44.673 [2024-07-25 11:55:43.469619] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:30:44.673 [2024-07-25 11:55:43.469630] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:44.673 [2024-07-25 11:55:43.469642] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:30:44.673 [2024-07-25 11:55:43.469653] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:30:44.673 [2024-07-25 11:55:43.469664] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:44.673 [2024-07-25 11:55:43.469675] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:30:44.673 [2024-07-25 11:55:43.469686] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:30:44.673 [2024-07-25 11:55:43.469697] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:44.673 [2024-07-25 11:55:43.469708] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:30:44.673 [2024-07-25 11:55:43.469718] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:30:44.673 [2024-07-25 11:55:43.469728] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:44.673 [2024-07-25 11:55:43.469738] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:30:44.673 [2024-07-25 11:55:43.469750] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:30:44.673 [2024-07-25 11:55:43.469760] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:44.673 [2024-07-25 11:55:43.469771] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:30:44.673 [2024-07-25 11:55:43.469782] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:30:44.673 [2024-07-25 11:55:43.469792] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:44.673 [2024-07-25 11:55:43.469809] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:30:44.673 [2024-07-25 11:55:43.469820] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:30:44.673 [2024-07-25 11:55:43.469831] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:44.673 [2024-07-25 11:55:43.469842] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:30:44.673 [2024-07-25 11:55:43.469853] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:30:44.673 [2024-07-25 11:55:43.469864] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:44.673 [2024-07-25 11:55:43.469874] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:30:44.673 [2024-07-25 11:55:43.469885] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:30:44.673 [2024-07-25 11:55:43.469896] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:44.673 [2024-07-25 11:55:43.469906] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:30:44.673 [2024-07-25 11:55:43.470187] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:30:44.673 [2024-07-25 11:55:43.470252] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:44.673 [2024-07-25 11:55:43.470380] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:44.673 [2024-07-25 11:55:43.470439] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:30:44.673 [2024-07-25 11:55:43.470481] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:30:44.673 [2024-07-25 11:55:43.470583] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:30:44.673 [2024-07-25 11:55:43.470707] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:30:44.673 [2024-07-25 11:55:43.470775] 
ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:30:44.673 [2024-07-25 11:55:43.470967] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:30:44.673 [2024-07-25 11:55:43.471023] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:30:44.673 [2024-07-25 11:55:43.471162] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:44.673 [2024-07-25 11:55:43.471224] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:30:44.673 [2024-07-25 11:55:43.471281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:30:44.673 [2024-07-25 11:55:43.471337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:30:44.673 [2024-07-25 11:55:43.471461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:30:44.673 [2024-07-25 11:55:43.471540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:30:44.673 [2024-07-25 11:55:43.471597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:30:44.673 [2024-07-25 11:55:43.471708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:30:44.673 [2024-07-25 11:55:43.471765] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:30:44.673 [2024-07-25 11:55:43.471940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:30:44.673 [2024-07-25 11:55:43.472064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:30:44.674 [2024-07-25 11:55:43.472083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:30:44.674 [2024-07-25 11:55:43.472095] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:30:44.674 [2024-07-25 11:55:43.472107] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:30:44.674 [2024-07-25 11:55:43.472119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:30:44.674 [2024-07-25 11:55:43.472131] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:30:44.674 [2024-07-25 11:55:43.472145] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:44.674 [2024-07-25 11:55:43.472157] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:44.674 [2024-07-25 11:55:43.472169] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:30:44.674 [2024-07-25 11:55:43.472184] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:30:44.674 [2024-07-25 11:55:43.472196] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:30:44.674 [2024-07-25 11:55:43.472217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.674 [2024-07-25 11:55:43.472230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:30:44.674 [2024-07-25 11:55:43.472243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.878 ms 00:30:44.674 [2024-07-25 11:55:43.472264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.674 [2024-07-25 11:55:43.472354] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:30:44.674 [2024-07-25 11:55:43.472376] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:30:47.204 [2024-07-25 11:55:46.056180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.204 [2024-07-25 11:55:46.056486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:30:47.204 [2024-07-25 11:55:46.056650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2583.832 ms 00:30:47.204 [2024-07-25 11:55:46.056792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.204 [2024-07-25 11:55:46.097058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.204 [2024-07-25 11:55:46.097407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:47.204 [2024-07-25 11:55:46.097577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 39.886 ms 00:30:47.204 [2024-07-25 11:55:46.097706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.204 [2024-07-25 11:55:46.097953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.204 [2024-07-25 11:55:46.098097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:47.204 [2024-07-25 11:55:46.098234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:30:47.204 [2024-07-25 11:55:46.098291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.204 [2024-07-25 11:55:46.142591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.204 [2024-07-25 11:55:46.142848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:47.204 [2024-07-25 11:55:46.143027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 44.106 ms 00:30:47.204 [2024-07-25 11:55:46.143159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.204 [2024-07-25 11:55:46.143300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.204 [2024-07-25 11:55:46.143429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:47.204 [2024-07-25 11:55:46.143547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:47.204 [2024-07-25 11:55:46.143599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.204 [2024-07-25 11:55:46.144356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.204 [2024-07-25 11:55:46.144521] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:47.204 [2024-07-25 11:55:46.144637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.602 ms 00:30:47.204 [2024-07-25 11:55:46.144688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.204 [2024-07-25 11:55:46.144800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.204 [2024-07-25 11:55:46.144871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:47.204 [2024-07-25 11:55:46.144936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:30:47.204 [2024-07-25 11:55:46.144989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.204 [2024-07-25 11:55:46.166229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.204 [2024-07-25 11:55:46.166541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:47.204 [2024-07-25 11:55:46.166679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.173 ms 00:30:47.204 [2024-07-25 11:55:46.166741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.204 [2024-07-25 11:55:46.183952] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:30:47.204 [2024-07-25 11:55:46.184191] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:30:47.204 [2024-07-25 11:55:46.184221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.204 [2024-07-25 11:55:46.184236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:30:47.204 [2024-07-25 11:55:46.184253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.201 ms 00:30:47.204 [2024-07-25 11:55:46.184265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.204 [2024-07-25 11:55:46.202490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.204 [2024-07-25 11:55:46.202562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:30:47.204 [2024-07-25 11:55:46.202584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.134 ms 00:30:47.204 [2024-07-25 11:55:46.202604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.204 [2024-07-25 11:55:46.218404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.204 [2024-07-25 11:55:46.218473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:30:47.204 [2024-07-25 11:55:46.218495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.697 ms 00:30:47.204 [2024-07-25 11:55:46.218508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.204 [2024-07-25 11:55:46.234492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.204 [2024-07-25 11:55:46.234567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:30:47.204 [2024-07-25 11:55:46.234590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.898 ms 00:30:47.204 [2024-07-25 11:55:46.234603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.204 [2024-07-25 11:55:46.235705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.204 [2024-07-25 11:55:46.235737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:47.204 [2024-07-25 
11:55:46.235760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.865 ms 00:30:47.204 [2024-07-25 11:55:46.235779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.479 [2024-07-25 11:55:46.328475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.479 [2024-07-25 11:55:46.328568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:30:47.479 [2024-07-25 11:55:46.328593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 92.658 ms 00:30:47.479 [2024-07-25 11:55:46.328606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.479 [2024-07-25 11:55:46.342904] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:47.479 [2024-07-25 11:55:46.344350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.479 [2024-07-25 11:55:46.344388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:47.479 [2024-07-25 11:55:46.344418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.631 ms 00:30:47.479 [2024-07-25 11:55:46.344450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.479 [2024-07-25 11:55:46.344620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.479 [2024-07-25 11:55:46.344644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:30:47.479 [2024-07-25 11:55:46.344659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:30:47.479 [2024-07-25 11:55:46.344672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.479 [2024-07-25 11:55:46.344766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.479 [2024-07-25 11:55:46.344787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:47.479 [2024-07-25 11:55:46.344802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:30:47.479 [2024-07-25 11:55:46.344821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.479 [2024-07-25 11:55:46.344875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.479 [2024-07-25 11:55:46.344890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:47.479 [2024-07-25 11:55:46.344903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:30:47.479 [2024-07-25 11:55:46.344916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.479 [2024-07-25 11:55:46.344994] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:30:47.479 [2024-07-25 11:55:46.345014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.479 [2024-07-25 11:55:46.345026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:30:47.479 [2024-07-25 11:55:46.345039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:30:47.479 [2024-07-25 11:55:46.345051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.479 [2024-07-25 11:55:46.379151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.479 [2024-07-25 11:55:46.379231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:30:47.479 [2024-07-25 11:55:46.379254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.057 ms 00:30:47.479 [2024-07-25 11:55:46.379268] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.479 [2024-07-25 11:55:46.379385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.479 [2024-07-25 11:55:46.379406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:30:47.479 [2024-07-25 11:55:46.379427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.049 ms 00:30:47.479 [2024-07-25 11:55:46.379449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.479 [2024-07-25 11:55:46.381152] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2952.007 ms, result 0 00:30:47.479 [2024-07-25 11:55:46.395675] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:47.479 [2024-07-25 11:55:46.411757] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:30:47.479 [2024-07-25 11:55:46.422050] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:47.479 11:55:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:47.479 11:55:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:30:47.479 11:55:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:47.479 11:55:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:30:47.479 11:55:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:47.746 [2024-07-25 11:55:46.730212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.746 [2024-07-25 11:55:46.730525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:47.746 [2024-07-25 11:55:46.730665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:30:47.746 [2024-07-25 11:55:46.730720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.746 [2024-07-25 11:55:46.730805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.746 [2024-07-25 11:55:46.730966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:47.746 [2024-07-25 11:55:46.731020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:47.746 [2024-07-25 11:55:46.731061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.746 [2024-07-25 11:55:46.731122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.746 [2024-07-25 11:55:46.731171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:47.746 [2024-07-25 11:55:46.731321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:47.746 [2024-07-25 11:55:46.731385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.746 [2024-07-25 11:55:46.731610] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 1.374 ms, result 0 00:30:47.746 true 00:30:47.746 11:55:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:48.005 { 00:30:48.005 "name": "ftl", 00:30:48.005 "properties": [ 00:30:48.005 { 00:30:48.005 "name": "superblock_version", 00:30:48.005 "value": 5, 00:30:48.005 "read-only": true 00:30:48.005 }, 
00:30:48.005 { 00:30:48.005 "name": "base_device", 00:30:48.005 "bands": [ 00:30:48.005 { 00:30:48.005 "id": 0, 00:30:48.005 "state": "CLOSED", 00:30:48.005 "validity": 1.0 00:30:48.005 }, 00:30:48.005 { 00:30:48.005 "id": 1, 00:30:48.005 "state": "CLOSED", 00:30:48.005 "validity": 1.0 00:30:48.005 }, 00:30:48.005 { 00:30:48.005 "id": 2, 00:30:48.005 "state": "CLOSED", 00:30:48.005 "validity": 0.007843137254901933 00:30:48.005 }, 00:30:48.005 { 00:30:48.005 "id": 3, 00:30:48.005 "state": "FREE", 00:30:48.005 "validity": 0.0 00:30:48.005 }, 00:30:48.005 { 00:30:48.005 "id": 4, 00:30:48.005 "state": "FREE", 00:30:48.005 "validity": 0.0 00:30:48.005 }, 00:30:48.005 { 00:30:48.005 "id": 5, 00:30:48.005 "state": "FREE", 00:30:48.005 "validity": 0.0 00:30:48.005 }, 00:30:48.005 { 00:30:48.005 "id": 6, 00:30:48.005 "state": "FREE", 00:30:48.005 "validity": 0.0 00:30:48.005 }, 00:30:48.005 { 00:30:48.005 "id": 7, 00:30:48.005 "state": "FREE", 00:30:48.005 "validity": 0.0 00:30:48.005 }, 00:30:48.005 { 00:30:48.005 "id": 8, 00:30:48.005 "state": "FREE", 00:30:48.005 "validity": 0.0 00:30:48.005 }, 00:30:48.005 { 00:30:48.005 "id": 9, 00:30:48.005 "state": "FREE", 00:30:48.005 "validity": 0.0 00:30:48.005 }, 00:30:48.005 { 00:30:48.005 "id": 10, 00:30:48.005 "state": "FREE", 00:30:48.005 "validity": 0.0 00:30:48.005 }, 00:30:48.005 { 00:30:48.005 "id": 11, 00:30:48.005 "state": "FREE", 00:30:48.005 "validity": 0.0 00:30:48.005 }, 00:30:48.005 { 00:30:48.005 "id": 12, 00:30:48.005 "state": "FREE", 00:30:48.005 "validity": 0.0 00:30:48.005 }, 00:30:48.005 { 00:30:48.005 "id": 13, 00:30:48.005 "state": "FREE", 00:30:48.005 "validity": 0.0 00:30:48.005 }, 00:30:48.005 { 00:30:48.005 "id": 14, 00:30:48.005 "state": "FREE", 00:30:48.005 "validity": 0.0 00:30:48.005 }, 00:30:48.005 { 00:30:48.005 "id": 15, 00:30:48.005 "state": "FREE", 00:30:48.005 "validity": 0.0 00:30:48.005 }, 00:30:48.005 { 00:30:48.005 "id": 16, 00:30:48.005 "state": "FREE", 00:30:48.005 "validity": 0.0 00:30:48.005 }, 00:30:48.005 { 00:30:48.005 "id": 17, 00:30:48.005 "state": "FREE", 00:30:48.005 "validity": 0.0 00:30:48.005 } 00:30:48.005 ], 00:30:48.005 "read-only": true 00:30:48.005 }, 00:30:48.005 { 00:30:48.005 "name": "cache_device", 00:30:48.005 "type": "bdev", 00:30:48.005 "chunks": [ 00:30:48.005 { 00:30:48.005 "id": 0, 00:30:48.005 "state": "INACTIVE", 00:30:48.005 "utilization": 0.0 00:30:48.005 }, 00:30:48.005 { 00:30:48.005 "id": 1, 00:30:48.005 "state": "OPEN", 00:30:48.005 "utilization": 0.0 00:30:48.005 }, 00:30:48.005 { 00:30:48.005 "id": 2, 00:30:48.005 "state": "OPEN", 00:30:48.005 "utilization": 0.0 00:30:48.005 }, 00:30:48.005 { 00:30:48.005 "id": 3, 00:30:48.005 "state": "FREE", 00:30:48.005 "utilization": 0.0 00:30:48.005 }, 00:30:48.005 { 00:30:48.005 "id": 4, 00:30:48.005 "state": "FREE", 00:30:48.005 "utilization": 0.0 00:30:48.005 } 00:30:48.005 ], 00:30:48.005 "read-only": true 00:30:48.005 }, 00:30:48.005 { 00:30:48.005 "name": "verbose_mode", 00:30:48.005 "value": true, 00:30:48.005 "unit": "", 00:30:48.005 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:48.005 }, 00:30:48.005 { 00:30:48.005 "name": "prep_upgrade_on_shutdown", 00:30:48.005 "value": false, 00:30:48.005 "unit": "", 00:30:48.005 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:48.005 } 00:30:48.005 ] 00:30:48.005 } 00:30:48.005 11:55:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:30:48.005 11:55:47 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:48.005 11:55:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:30:48.264 11:55:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:30:48.264 11:55:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:30:48.264 11:55:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:30:48.264 11:55:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:30:48.264 11:55:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:48.829 Validate MD5 checksum, iteration 1 00:30:48.829 11:55:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:30:48.829 11:55:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:30:48.829 11:55:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:30:48.829 11:55:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:30:48.829 11:55:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:30:48.829 11:55:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:48.829 11:55:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:30:48.829 11:55:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:48.829 11:55:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:48.829 11:55:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:48.829 11:55:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:48.829 11:55:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:48.829 11:55:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:48.829 [2024-07-25 11:55:47.679687] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:30:48.829 [2024-07-25 11:55:47.680035] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85922 ] 00:30:48.829 [2024-07-25 11:55:47.850470] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:49.085 [2024-07-25 11:55:48.108878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:54.134  Copying: 491/1024 [MB] (491 MBps) Copying: 910/1024 [MB] (419 MBps) Copying: 1024/1024 [MB] (average 460 MBps) 00:30:54.134 00:30:54.134 11:55:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:30:54.134 11:55:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:56.686 11:55:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:56.686 11:55:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=996042ad09dfe99b9d31015b747318d2 00:30:56.686 11:55:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 996042ad09dfe99b9d31015b747318d2 != \9\9\6\0\4\2\a\d\0\9\d\f\e\9\9\b\9\d\3\1\0\1\5\b\7\4\7\3\1\8\d\2 ]] 00:30:56.686 11:55:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:56.686 Validate MD5 checksum, iteration 2 00:30:56.686 11:55:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:56.686 11:55:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:30:56.686 11:55:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:56.686 11:55:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:56.686 11:55:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:56.686 11:55:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:56.686 11:55:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:56.686 11:55:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:56.686 [2024-07-25 11:55:55.430366] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
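Iteration 1 above pulled 1024 MiB from ftln1 over NVMe/TCP at an average of 460 MBps and hashed it to 996042ad09dfe99b9d31015b747318d2; the odd-looking [[ ... != \9\9\6\0\4\2... ]] record is just bash xtrace escaping the right-hand side of the string comparison character by character. A minimal sketch of one validation pass, using only commands visible in the trace (tcp_dd is the harness wrapper from ftl/common.sh that drives spdk_dd against the TCP target; $expected stands in for whatever reference value upgrade_shutdown.sh actually compares against):

    # read one 1 GiB window from the FTL namespace and checksum it
    tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
        --bs=1048576 --count=1024 --qd=2 --skip=$skip
    sum=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 '-d ')
    [[ $sum != "$expected" ]] && exit 1      # a mismatch aborts the run
    skip=$((skip + 1024))                    # next iteration reads the next 1 GiB window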
00:30:56.686 [2024-07-25 11:55:55.430843] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85996 ] 00:30:56.686 [2024-07-25 11:55:55.598621] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:56.944 [2024-07-25 11:55:55.884807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:03.062  Copying: 459/1024 [MB] (459 MBps) Copying: 952/1024 [MB] (493 MBps) Copying: 1024/1024 [MB] (average 477 MBps) 00:31:03.062 00:31:03.062 11:56:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:31:03.062 11:56:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:04.963 11:56:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:04.963 11:56:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=0cf2b616595cf45ac64a3f0691fcc803 00:31:04.963 11:56:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 0cf2b616595cf45ac64a3f0691fcc803 != \0\c\f\2\b\6\1\6\5\9\5\c\f\4\5\a\c\6\4\a\3\f\0\6\9\1\f\c\c\8\0\3 ]] 00:31:04.963 11:56:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:04.963 11:56:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:04.963 11:56:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:31:04.963 11:56:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 85843 ]] 00:31:04.963 11:56:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 85843 00:31:04.963 11:56:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:31:04.963 11:56:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:31:04.963 11:56:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:31:04.963 11:56:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:31:04.963 11:56:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:04.964 11:56:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=86086 00:31:04.964 11:56:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:31:04.964 11:56:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 86086 00:31:04.964 11:56:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 86086 ']' 00:31:04.964 11:56:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:04.964 11:56:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:04.964 11:56:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:04.964 11:56:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:04.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
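Iteration 2 validates the second 1 GiB window the same way (checksum 0cf2b616595cf45ac64a3f0691fcc803), after which the test enters its dirty-shutdown leg: the first target, pid 85843, is killed with SIGKILL instead of being shut down cleanly (autotest_common.sh duly reports "85843 Killed" further down in the trace), and a second target is started against the same JSON config, so the subsequent FTL startup has to recover state that was never marked clean. A minimal sketch of that restart step, assembled from the commands the trace shows (waitforlisten is the harness helper that blocks until the new process answers on /var/tmp/spdk.sock):

    kill -9 "$spdk_tgt_pid"    # no clean FTL shutdown; on-device state stays dirty
    unset spdk_tgt_pid
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
        --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"    # wait for the RPC socket before issuing commands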
00:31:04.964 11:56:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:04.964 11:56:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:05.222 [2024-07-25 11:56:04.037019] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:31:05.222 [2024-07-25 11:56:04.037227] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86086 ] 00:31:05.222 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 830: 85843 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:31:05.222 [2024-07-25 11:56:04.208845] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:05.481 [2024-07-25 11:56:04.459162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:06.416 [2024-07-25 11:56:05.315520] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:06.416 [2024-07-25 11:56:05.315617] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:06.416 [2024-07-25 11:56:05.464700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.416 [2024-07-25 11:56:05.464810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:31:06.416 [2024-07-25 11:56:05.464847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:31:06.416 [2024-07-25 11:56:05.464872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.416 [2024-07-25 11:56:05.465000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.416 [2024-07-25 11:56:05.465020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:06.416 [2024-07-25 11:56:05.465034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.099 ms 00:31:06.416 [2024-07-25 11:56:05.465046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.416 [2024-07-25 11:56:05.465084] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:31:06.416 [2024-07-25 11:56:05.466025] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:31:06.416 [2024-07-25 11:56:05.466052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.416 [2024-07-25 11:56:05.466064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:06.416 [2024-07-25 11:56:05.466077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.980 ms 00:31:06.416 [2024-07-25 11:56:05.466092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.416 [2024-07-25 11:56:05.466695] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:31:06.708 [2024-07-25 11:56:05.486469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.708 [2024-07-25 11:56:05.486537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:31:06.708 [2024-07-25 11:56:05.486582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.790 ms 00:31:06.708 [2024-07-25 11:56:05.486594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.708 [2024-07-25 11:56:05.497838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:31:06.708 [2024-07-25 11:56:05.497894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:31:06.708 [2024-07-25 11:56:05.497929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.048 ms 00:31:06.708 [2024-07-25 11:56:05.497974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.708 [2024-07-25 11:56:05.498666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.708 [2024-07-25 11:56:05.498702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:06.708 [2024-07-25 11:56:05.498719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.517 ms 00:31:06.708 [2024-07-25 11:56:05.498731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.708 [2024-07-25 11:56:05.498868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.708 [2024-07-25 11:56:05.498887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:06.708 [2024-07-25 11:56:05.498900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.075 ms 00:31:06.708 [2024-07-25 11:56:05.498911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.708 [2024-07-25 11:56:05.499142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.708 [2024-07-25 11:56:05.499218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:31:06.708 [2024-07-25 11:56:05.499274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:31:06.708 [2024-07-25 11:56:05.499311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.708 [2024-07-25 11:56:05.499383] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:31:06.708 [2024-07-25 11:56:05.503063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.708 [2024-07-25 11:56:05.503102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:06.708 [2024-07-25 11:56:05.503133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.692 ms 00:31:06.708 [2024-07-25 11:56:05.503144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.708 [2024-07-25 11:56:05.503190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.708 [2024-07-25 11:56:05.503207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:31:06.708 [2024-07-25 11:56:05.503229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:31:06.708 [2024-07-25 11:56:05.503243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.708 [2024-07-25 11:56:05.503292] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:31:06.708 [2024-07-25 11:56:05.503325] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:31:06.708 [2024-07-25 11:56:05.503370] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:31:06.708 [2024-07-25 11:56:05.503390] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x168 bytes 00:31:06.708 [2024-07-25 11:56:05.503486] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:31:06.708 [2024-07-25 11:56:05.503501] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:31:06.708 [2024-07-25 11:56:05.503515] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:31:06.708 [2024-07-25 11:56:05.503530] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:31:06.708 [2024-07-25 11:56:05.503543] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:31:06.708 [2024-07-25 11:56:05.503559] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:31:06.708 [2024-07-25 11:56:05.503570] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:31:06.708 [2024-07-25 11:56:05.503581] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:31:06.708 [2024-07-25 11:56:05.503592] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:31:06.708 [2024-07-25 11:56:05.503608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.708 [2024-07-25 11:56:05.503619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:31:06.708 [2024-07-25 11:56:05.503630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.318 ms 00:31:06.708 [2024-07-25 11:56:05.503641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.708 [2024-07-25 11:56:05.503758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.708 [2024-07-25 11:56:05.503773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:31:06.708 [2024-07-25 11:56:05.503790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.074 ms 00:31:06.708 [2024-07-25 11:56:05.503801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.708 [2024-07-25 11:56:05.503918] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:31:06.708 [2024-07-25 11:56:05.503934] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:31:06.708 [2024-07-25 11:56:05.503947] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:06.708 [2024-07-25 11:56:05.503995] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:06.708 [2024-07-25 11:56:05.504010] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:31:06.708 [2024-07-25 11:56:05.504021] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:31:06.708 [2024-07-25 11:56:05.504032] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:31:06.708 [2024-07-25 11:56:05.504042] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:31:06.708 [2024-07-25 11:56:05.504052] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:31:06.708 [2024-07-25 11:56:05.504067] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:06.708 [2024-07-25 11:56:05.504091] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:31:06.708 [2024-07-25 11:56:05.504101] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:31:06.708 [2024-07-25 11:56:05.504111] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:06.708 [2024-07-25 11:56:05.504120] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:31:06.708 [2024-07-25 11:56:05.504130] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:31:06.708 [2024-07-25 11:56:05.504141] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:06.708 [2024-07-25 11:56:05.504151] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:31:06.708 [2024-07-25 11:56:05.504177] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:31:06.708 [2024-07-25 11:56:05.504188] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:06.708 [2024-07-25 11:56:05.504198] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:31:06.708 [2024-07-25 11:56:05.504212] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:31:06.708 [2024-07-25 11:56:05.504222] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:06.708 [2024-07-25 11:56:05.504232] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:31:06.708 [2024-07-25 11:56:05.504243] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:31:06.708 [2024-07-25 11:56:05.504253] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:06.708 [2024-07-25 11:56:05.504263] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:31:06.708 [2024-07-25 11:56:05.504274] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:31:06.708 [2024-07-25 11:56:05.504284] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:06.708 [2024-07-25 11:56:05.504294] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:31:06.708 [2024-07-25 11:56:05.504319] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:31:06.708 [2024-07-25 11:56:05.504329] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:06.708 [2024-07-25 11:56:05.504338] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:31:06.708 [2024-07-25 11:56:05.504349] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:31:06.708 [2024-07-25 11:56:05.504358] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:06.708 [2024-07-25 11:56:05.504368] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:31:06.708 [2024-07-25 11:56:05.504378] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:31:06.708 [2024-07-25 11:56:05.504389] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:06.708 [2024-07-25 11:56:05.504399] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:31:06.708 [2024-07-25 11:56:05.504409] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:31:06.708 [2024-07-25 11:56:05.504418] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:06.708 [2024-07-25 11:56:05.504466] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:31:06.708 [2024-07-25 11:56:05.504478] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:31:06.708 [2024-07-25 11:56:05.504487] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:06.708 [2024-07-25 11:56:05.504497] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:31:06.708 [2024-07-25 11:56:05.504509] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:31:06.708 [2024-07-25 11:56:05.504523] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:06.708 [2024-07-25 11:56:05.504534] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:31:06.708 [2024-07-25 11:56:05.504553] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:31:06.708 [2024-07-25 11:56:05.504565] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:31:06.708 [2024-07-25 11:56:05.504603] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:31:06.708 [2024-07-25 11:56:05.504616] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:31:06.709 [2024-07-25 11:56:05.504626] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:31:06.709 [2024-07-25 11:56:05.504636] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:31:06.709 [2024-07-25 11:56:05.504651] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:31:06.709 [2024-07-25 11:56:05.504665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:06.709 [2024-07-25 11:56:05.504677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:31:06.709 [2024-07-25 11:56:05.504688] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:31:06.709 [2024-07-25 11:56:05.504714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:31:06.709 [2024-07-25 11:56:05.504725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:31:06.709 [2024-07-25 11:56:05.504736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:31:06.709 [2024-07-25 11:56:05.504747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:31:06.709 [2024-07-25 11:56:05.504773] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:31:06.709 [2024-07-25 11:56:05.504784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:31:06.709 [2024-07-25 11:56:05.504795] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:31:06.709 [2024-07-25 11:56:05.504805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:31:06.709 [2024-07-25 11:56:05.504816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:31:06.709 [2024-07-25 11:56:05.504828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:31:06.709 [2024-07-25 11:56:05.504839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:31:06.709 [2024-07-25 11:56:05.504850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:31:06.709 [2024-07-25 11:56:05.504860] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:31:06.709 [2024-07-25 11:56:05.504872] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:06.709 [2024-07-25 11:56:05.504884] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:06.709 [2024-07-25 11:56:05.504894] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:31:06.709 [2024-07-25 11:56:05.504905] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:31:06.709 [2024-07-25 11:56:05.504916] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:31:06.709 [2024-07-25 11:56:05.504928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.709 [2024-07-25 11:56:05.504939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:31:06.709 [2024-07-25 11:56:05.504953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.068 ms 00:31:06.709 [2024-07-25 11:56:05.504964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.709 [2024-07-25 11:56:05.539127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.709 [2024-07-25 11:56:05.539192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:06.709 [2024-07-25 11:56:05.539230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.069 ms 00:31:06.709 [2024-07-25 11:56:05.539241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.709 [2024-07-25 11:56:05.539322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.709 [2024-07-25 11:56:05.539337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:31:06.709 [2024-07-25 11:56:05.539357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:31:06.709 [2024-07-25 11:56:05.539368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.709 [2024-07-25 11:56:05.582423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.709 [2024-07-25 11:56:05.582493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:06.709 [2024-07-25 11:56:05.582515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 42.951 ms 00:31:06.709 [2024-07-25 11:56:05.582528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.709 [2024-07-25 11:56:05.582632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.709 [2024-07-25 11:56:05.582650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:06.709 [2024-07-25 11:56:05.582664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:06.709 [2024-07-25 11:56:05.582676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.709 [2024-07-25 11:56:05.582924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.709 [2024-07-25 11:56:05.582944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:06.709 [2024-07-25 11:56:05.582957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.098 ms 00:31:06.709 [2024-07-25 11:56:05.582994] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:31:06.709 [2024-07-25 11:56:05.583066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.709 [2024-07-25 11:56:05.583083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:06.709 [2024-07-25 11:56:05.583096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:31:06.709 [2024-07-25 11:56:05.583108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.709 [2024-07-25 11:56:05.605047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.709 [2024-07-25 11:56:05.605107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:06.709 [2024-07-25 11:56:05.605143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.906 ms 00:31:06.709 [2024-07-25 11:56:05.605156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.709 [2024-07-25 11:56:05.605463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.709 [2024-07-25 11:56:05.605488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:31:06.709 [2024-07-25 11:56:05.605503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:31:06.709 [2024-07-25 11:56:05.605519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.709 [2024-07-25 11:56:05.636160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.709 [2024-07-25 11:56:05.636249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:31:06.709 [2024-07-25 11:56:05.636289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.593 ms 00:31:06.709 [2024-07-25 11:56:05.636310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.709 [2024-07-25 11:56:05.648949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.709 [2024-07-25 11:56:05.648998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:31:06.709 [2024-07-25 11:56:05.649033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.785 ms 00:31:06.709 [2024-07-25 11:56:05.649044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.709 [2024-07-25 11:56:05.724802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.709 [2024-07-25 11:56:05.724878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:31:06.709 [2024-07-25 11:56:05.724916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 75.658 ms 00:31:06.709 [2024-07-25 11:56:05.724928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.709 [2024-07-25 11:56:05.725243] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:31:06.709 [2024-07-25 11:56:05.725460] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:31:06.709 [2024-07-25 11:56:05.725629] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:31:06.709 [2024-07-25 11:56:05.725801] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:31:06.709 [2024-07-25 11:56:05.725816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.709 [2024-07-25 11:56:05.725827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:31:06.709 [2024-07-25 
11:56:05.725847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.750 ms 00:31:06.709 [2024-07-25 11:56:05.725859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.709 [2024-07-25 11:56:05.726017] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:31:06.709 [2024-07-25 11:56:05.726041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.709 [2024-07-25 11:56:05.726073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:31:06.709 [2024-07-25 11:56:05.726086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:31:06.709 [2024-07-25 11:56:05.726097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.002 [2024-07-25 11:56:05.745178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.002 [2024-07-25 11:56:05.745251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:31:07.002 [2024-07-25 11:56:05.745289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.045 ms 00:31:07.002 [2024-07-25 11:56:05.745301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.002 [2024-07-25 11:56:05.756643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.002 [2024-07-25 11:56:05.756692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:31:07.002 [2024-07-25 11:56:05.756875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:31:07.002 [2024-07-25 11:56:05.756887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.002 [2024-07-25 11:56:05.757296] ftl_nv_cache.c:2471:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:31:07.568 [2024-07-25 11:56:06.353113] ftl_nv_cache.c:2408:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:31:07.568 [2024-07-25 11:56:06.353330] ftl_nv_cache.c:2471:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:31:08.135 [2024-07-25 11:56:06.942324] ftl_nv_cache.c:2408:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:31:08.135 [2024-07-25 11:56:06.942501] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:31:08.135 [2024-07-25 11:56:06.942542] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:31:08.135 [2024-07-25 11:56:06.942592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.135 [2024-07-25 11:56:06.942631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:31:08.135 [2024-07-25 11:56:06.942667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1185.511 ms 00:31:08.135 [2024-07-25 11:56:06.942690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.135 [2024-07-25 11:56:06.942756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.135 [2024-07-25 11:56:06.942778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:31:08.135 [2024-07-25 11:56:06.942796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:08.135 [2024-07-25 11:56:06.942839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 
status: 0 00:31:08.135 [2024-07-25 11:56:06.959349] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:31:08.135 [2024-07-25 11:56:06.959771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.135 [2024-07-25 11:56:06.959811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:31:08.135 [2024-07-25 11:56:06.959836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.893 ms 00:31:08.135 [2024-07-25 11:56:06.959850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.135 [2024-07-25 11:56:06.960892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.135 [2024-07-25 11:56:06.961006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:31:08.135 [2024-07-25 11:56:06.961027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.705 ms 00:31:08.135 [2024-07-25 11:56:06.961050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.135 [2024-07-25 11:56:06.963655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.135 [2024-07-25 11:56:06.963697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:31:08.135 [2024-07-25 11:56:06.963715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.563 ms 00:31:08.135 [2024-07-25 11:56:06.963728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.135 [2024-07-25 11:56:06.963824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.135 [2024-07-25 11:56:06.963845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:31:08.135 [2024-07-25 11:56:06.963860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:31:08.135 [2024-07-25 11:56:06.963874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.135 [2024-07-25 11:56:06.964116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.135 [2024-07-25 11:56:06.964139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:31:08.135 [2024-07-25 11:56:06.964198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:31:08.135 [2024-07-25 11:56:06.964225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.135 [2024-07-25 11:56:06.964271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.135 [2024-07-25 11:56:06.964288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:31:08.135 [2024-07-25 11:56:06.964312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:31:08.135 [2024-07-25 11:56:06.964326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.136 [2024-07-25 11:56:06.964382] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:31:08.136 [2024-07-25 11:56:06.964404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.136 [2024-07-25 11:56:06.964424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:31:08.136 [2024-07-25 11:56:06.964482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:31:08.136 [2024-07-25 11:56:06.964497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.136 [2024-07-25 11:56:06.964584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.136 
[2024-07-25 11:56:06.964616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:31:08.136 [2024-07-25 11:56:06.964634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms 00:31:08.136 [2024-07-25 11:56:06.964648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.136 [2024-07-25 11:56:06.966413] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1500.927 ms, result 0 00:31:08.136 [2024-07-25 11:56:06.980474] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:08.136 [2024-07-25 11:56:06.996532] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:31:08.136 [2024-07-25 11:56:07.007088] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:08.136 Validate MD5 checksum, iteration 1 00:31:08.136 11:56:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:08.136 11:56:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:31:08.136 11:56:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:08.136 11:56:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:31:08.136 11:56:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:31:08.136 11:56:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:31:08.136 11:56:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:31:08.136 11:56:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:08.136 11:56:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:31:08.136 11:56:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:08.136 11:56:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:08.136 11:56:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:08.136 11:56:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:08.136 11:56:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:08.136 11:56:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:08.136 [2024-07-25 11:56:07.143262] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
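The xtrace above comes from test_validate_checksum in upgrade_shutdown.sh: after the FTL device is brought back up, the test reads it back through the NVMe/TCP initiator in 1 GiB windows and compares each window's md5 against the sum recorded before shutdown. A minimal sketch of that loop, pieced together from the traced commands only (the sums array, iterations, and testdir are context not shown here and are assumptions):

test_validate_checksum() {
    local skip=0 i sum
    for (( i = 0; i < iterations; i++ )); do
        echo "Validate MD5 checksum, iteration $(( i + 1 ))"
        # tcp_dd = spdk_dd run as an NVMe/TCP initiator against the FTL target
        # (see the ftl/common.sh@198-199 lines traced above)
        tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        (( skip += 1024 ))
        sum=$(md5sum "$testdir/file" | cut -f1 -d ' ')
        # the trace shows this as a negated [[ ... != ... ]] test; same logic
        [[ $sum == "${sums[i]}" ]] || return 1   # any mismatch fails the test
    done
}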
00:31:08.136 [2024-07-25 11:56:07.143745] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86125 ] 00:31:08.394 [2024-07-25 11:56:07.313514] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:08.652 [2024-07-25 11:56:07.564665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:13.125  Copying: 414/1024 [MB] (414 MBps) Copying: 844/1024 [MB] (430 MBps) Copying: 1024/1024 [MB] (average 430 MBps) 00:31:13.125 00:31:13.125 11:56:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:31:13.125 11:56:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:15.656 11:56:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:15.656 Validate MD5 checksum, iteration 2 00:31:15.656 11:56:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=996042ad09dfe99b9d31015b747318d2 00:31:15.656 11:56:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 996042ad09dfe99b9d31015b747318d2 != \9\9\6\0\4\2\a\d\0\9\d\f\e\9\9\b\9\d\3\1\0\1\5\b\7\4\7\3\1\8\d\2 ]] 00:31:15.656 11:56:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:15.656 11:56:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:15.656 11:56:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:31:15.656 11:56:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:15.656 11:56:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:15.656 11:56:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:15.656 11:56:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:15.656 11:56:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:15.656 11:56:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:15.656 [2024-07-25 11:56:14.443502] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:31:15.656 [2024-07-25 11:56:14.444006] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86200 ] 00:31:15.656 [2024-07-25 11:56:14.614006] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:15.914 [2024-07-25 11:56:14.896017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:20.763  Copying: 467/1024 [MB] (467 MBps) Copying: 906/1024 [MB] (439 MBps) Copying: 1024/1024 [MB] (average 447 MBps) 00:31:20.763 00:31:20.763 11:56:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:31:20.763 11:56:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:22.665 11:56:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:22.665 11:56:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=0cf2b616595cf45ac64a3f0691fcc803 00:31:22.665 11:56:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 0cf2b616595cf45ac64a3f0691fcc803 != \0\c\f\2\b\6\1\6\5\9\5\c\f\4\5\a\c\6\4\a\3\f\0\6\9\1\f\c\c\8\0\3 ]] 00:31:22.665 11:56:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:22.665 11:56:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:22.665 11:56:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:31:22.665 11:56:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:31:22.665 11:56:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:31:22.665 11:56:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:22.665 11:56:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:31:22.665 11:56:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:31:22.665 11:56:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:31:22.665 11:56:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:31:22.665 11:56:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 86086 ]] 00:31:22.665 11:56:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 86086 00:31:22.665 11:56:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 86086 ']' 00:31:22.665 11:56:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 86086 00:31:22.665 11:56:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:31:22.665 11:56:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:22.665 11:56:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86086 00:31:22.923 11:56:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:22.923 killing process with pid 86086 00:31:22.923 11:56:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:22.923 11:56:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86086' 00:31:22.923 11:56:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 86086 00:31:22.923 11:56:21 
ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 86086 00:31:23.862 [2024-07-25 11:56:22.742033] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:31:23.862 [2024-07-25 11:56:22.760566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:23.862 [2024-07-25 11:56:22.760623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:31:23.862 [2024-07-25 11:56:22.760645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:23.862 [2024-07-25 11:56:22.760659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:23.862 [2024-07-25 11:56:22.760698] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:31:23.862 [2024-07-25 11:56:22.764422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:23.862 [2024-07-25 11:56:22.764481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:31:23.862 [2024-07-25 11:56:22.764498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.702 ms 00:31:23.862 [2024-07-25 11:56:22.764510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:23.862 [2024-07-25 11:56:22.764779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:23.862 [2024-07-25 11:56:22.764807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:31:23.862 [2024-07-25 11:56:22.764820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.220 ms 00:31:23.862 [2024-07-25 11:56:22.764833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:23.862 [2024-07-25 11:56:22.766192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:23.862 [2024-07-25 11:56:22.766234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:31:23.862 [2024-07-25 11:56:22.766258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.335 ms 00:31:23.862 [2024-07-25 11:56:22.766270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:23.862 [2024-07-25 11:56:22.767590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:23.862 [2024-07-25 11:56:22.767638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:31:23.862 [2024-07-25 11:56:22.767653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.277 ms 00:31:23.862 [2024-07-25 11:56:22.767676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:23.862 [2024-07-25 11:56:22.780486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:23.862 [2024-07-25 11:56:22.780543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:31:23.862 [2024-07-25 11:56:22.780578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.722 ms 00:31:23.862 [2024-07-25 11:56:22.780590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:23.862 [2024-07-25 11:56:22.787060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:23.862 [2024-07-25 11:56:22.787102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:31:23.862 [2024-07-25 11:56:22.787141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.426 ms 00:31:23.862 [2024-07-25 11:56:22.787152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:23.862 [2024-07-25 11:56:22.787252] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:23.862 [2024-07-25 11:56:22.787270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:31:23.862 [2024-07-25 11:56:22.787289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms 00:31:23.862 [2024-07-25 11:56:22.787300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:23.862 [2024-07-25 11:56:22.798751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:23.862 [2024-07-25 11:56:22.798791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:31:23.862 [2024-07-25 11:56:22.798822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.427 ms 00:31:23.862 [2024-07-25 11:56:22.798833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:23.862 [2024-07-25 11:56:22.810564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:23.862 [2024-07-25 11:56:22.810603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:31:23.862 [2024-07-25 11:56:22.810634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.691 ms 00:31:23.863 [2024-07-25 11:56:22.810644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:23.863 [2024-07-25 11:56:22.822142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:23.863 [2024-07-25 11:56:22.822179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:31:23.863 [2024-07-25 11:56:22.822211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.458 ms 00:31:23.863 [2024-07-25 11:56:22.822221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:23.863 [2024-07-25 11:56:22.833702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:23.863 [2024-07-25 11:56:22.833755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:31:23.863 [2024-07-25 11:56:22.833787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.400 ms 00:31:23.863 [2024-07-25 11:56:22.833799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:23.863 [2024-07-25 11:56:22.833849] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:31:23.863 [2024-07-25 11:56:22.833887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:31:23.863 [2024-07-25 11:56:22.833911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:31:23.863 [2024-07-25 11:56:22.833977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:31:23.863 [2024-07-25 11:56:22.834000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:23.863 [2024-07-25 11:56:22.834023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:23.863 [2024-07-25 11:56:22.834043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:23.863 [2024-07-25 11:56:22.834094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:23.863 [2024-07-25 11:56:22.834113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:23.863 [2024-07-25 11:56:22.834131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:23.863 [2024-07-25 11:56:22.834150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:23.863 [2024-07-25 11:56:22.834175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:23.863 [2024-07-25 11:56:22.834192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:23.863 [2024-07-25 11:56:22.834209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:23.863 [2024-07-25 11:56:22.834229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:23.863 [2024-07-25 11:56:22.834248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:23.863 [2024-07-25 11:56:22.834266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:23.863 [2024-07-25 11:56:22.834285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:23.863 [2024-07-25 11:56:22.834347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:23.863 [2024-07-25 11:56:22.834371] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:31:23.863 [2024-07-25 11:56:22.834391] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 290b9015-b66f-4a42-915e-8a7920490245 00:31:23.863 [2024-07-25 11:56:22.834413] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:31:23.863 [2024-07-25 11:56:22.834435] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:31:23.863 [2024-07-25 11:56:22.834456] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:31:23.863 [2024-07-25 11:56:22.834474] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:31:23.863 [2024-07-25 11:56:22.834492] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:31:23.863 [2024-07-25 11:56:22.834524] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:31:23.863 [2024-07-25 11:56:22.834546] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:31:23.863 [2024-07-25 11:56:22.834564] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:31:23.863 [2024-07-25 11:56:22.834581] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:31:23.863 [2024-07-25 11:56:22.834600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:23.863 [2024-07-25 11:56:22.834620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:31:23.863 [2024-07-25 11:56:22.834645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.754 ms 00:31:23.863 [2024-07-25 11:56:22.834666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:23.863 [2024-07-25 11:56:22.851344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:23.863 [2024-07-25 11:56:22.851417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:31:23.863 [2024-07-25 11:56:22.851455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.628 ms 00:31:23.863 [2024-07-25 11:56:22.851481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:23.863 [2024-07-25 11:56:22.852040] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:31:23.863 [2024-07-25 11:56:22.852062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:31:23.863 [2024-07-25 11:56:22.852076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.472 ms 00:31:23.863 [2024-07-25 11:56:22.852088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:23.863 [2024-07-25 11:56:22.905237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:23.863 [2024-07-25 11:56:22.905320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:23.863 [2024-07-25 11:56:22.905364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:23.863 [2024-07-25 11:56:22.905377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:23.863 [2024-07-25 11:56:22.905507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:23.863 [2024-07-25 11:56:22.905524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:23.863 [2024-07-25 11:56:22.905536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:23.863 [2024-07-25 11:56:22.905579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:23.863 [2024-07-25 11:56:22.905725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:23.863 [2024-07-25 11:56:22.905746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:23.863 [2024-07-25 11:56:22.905760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:23.863 [2024-07-25 11:56:22.905779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:23.863 [2024-07-25 11:56:22.905808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:23.863 [2024-07-25 11:56:22.905824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:23.863 [2024-07-25 11:56:22.905837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:23.863 [2024-07-25 11:56:22.905849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.122 [2024-07-25 11:56:23.012913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:24.122 [2024-07-25 11:56:23.013011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:24.122 [2024-07-25 11:56:23.013044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:24.122 [2024-07-25 11:56:23.013057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.122 [2024-07-25 11:56:23.102873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:24.122 [2024-07-25 11:56:23.102976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:24.122 [2024-07-25 11:56:23.103000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:24.122 [2024-07-25 11:56:23.103024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.122 [2024-07-25 11:56:23.103199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:24.122 [2024-07-25 11:56:23.103222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:24.122 [2024-07-25 11:56:23.103235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:24.122 [2024-07-25 11:56:23.103248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.122 [2024-07-25 
11:56:23.103323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:24.122 [2024-07-25 11:56:23.103341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:24.122 [2024-07-25 11:56:23.103355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:24.122 [2024-07-25 11:56:23.103367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.122 [2024-07-25 11:56:23.103498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:24.122 [2024-07-25 11:56:23.103525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:24.122 [2024-07-25 11:56:23.103539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:24.122 [2024-07-25 11:56:23.103551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.122 [2024-07-25 11:56:23.103615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:24.122 [2024-07-25 11:56:23.103640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:31:24.122 [2024-07-25 11:56:23.103653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:24.122 [2024-07-25 11:56:23.103665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.122 [2024-07-25 11:56:23.103718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:24.122 [2024-07-25 11:56:23.103734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:24.122 [2024-07-25 11:56:23.103747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:24.122 [2024-07-25 11:56:23.103759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.122 [2024-07-25 11:56:23.103870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:24.122 [2024-07-25 11:56:23.103888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:24.122 [2024-07-25 11:56:23.103901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:24.122 [2024-07-25 11:56:23.103913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.122 [2024-07-25 11:56:23.104110] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 343.488 ms, result 0 00:31:25.497 11:56:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:31:25.497 11:56:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:25.497 11:56:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:31:25.497 11:56:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:31:25.497 11:56:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:31:25.497 11:56:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:25.497 Remove shared memory files 00:31:25.497 11:56:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:31:25.497 11:56:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:31:25.497 11:56:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:31:25.497 11:56:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:31:25.497 11:56:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid85843 
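The killprocess xtrace above (pid 86086, and again below for 86335) shows the helper checking several things before signalling. A rough reconstruction from exactly the checks visible in the trace:

killprocess() {
    local pid=$1 process_name
    [ -z "$pid" ] && return 1                           # @950: refuse an empty pid
    kill -0 "$pid" || return 1                          # @954: bail if it already exited
    if [ "$(uname)" = Linux ]; then                     # @955
        process_name=$(ps --no-headers -o comm= "$pid") # @956
    fi
    # @960: the trace only ever shows this guard evaluating false (reactor_0);
    # what the sudo branch actually does is not visible here, so this bail-out
    # is an assumption
    [ "$process_name" = sudo ] && return 1
    echo "killing process with pid $pid"                # @968
    kill "$pid"                                         # @969
    wait "$pid"                                         # @974: reap it before cleanup
}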
00:31:25.497 11:56:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:31:25.497 11:56:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:31:25.497 ************************************ 00:31:25.497 END TEST ftl_upgrade_shutdown 00:31:25.497 ************************************ 00:31:25.497 00:31:25.497 real 1m41.406s 00:31:25.497 user 2m22.839s 00:31:25.497 sys 0m25.335s 00:31:25.497 11:56:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:25.497 11:56:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:25.497 11:56:24 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:31:25.497 11:56:24 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:31:25.497 11:56:24 ftl -- ftl/ftl.sh@14 -- # killprocess 78300 00:31:25.497 Process with pid 78300 is not found 00:31:25.497 11:56:24 ftl -- common/autotest_common.sh@950 -- # '[' -z 78300 ']' 00:31:25.497 11:56:24 ftl -- common/autotest_common.sh@954 -- # kill -0 78300 00:31:25.497 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (78300) - No such process 00:31:25.497 11:56:24 ftl -- common/autotest_common.sh@977 -- # echo 'Process with pid 78300 is not found' 00:31:25.497 11:56:24 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:31:25.497 11:56:24 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=86335 00:31:25.497 11:56:24 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:25.497 11:56:24 ftl -- ftl/ftl.sh@20 -- # waitforlisten 86335 00:31:25.497 11:56:24 ftl -- common/autotest_common.sh@831 -- # '[' -z 86335 ']' 00:31:25.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:25.497 11:56:24 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:25.497 11:56:24 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:25.497 11:56:24 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:25.497 11:56:24 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:25.497 11:56:24 ftl -- common/autotest_common.sh@10 -- # set +x 00:31:25.757 [2024-07-25 11:56:24.595150] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
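waitforlisten, traced above for the fresh spdk_tgt (pid 86335), polls until the target answers on its RPC socket; the poll body runs under xtrace_disable, so only the setup and the closing '(( i == 0 ))' / 'return 0' appear in the log. A sketch under those constraints; the probe command and $rootdir are assumptions:

waitforlisten() {
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk.sock}   # @835
    local max_retries=100 i                   # @836
    [ -z "$pid" ] && return 1
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for (( i = max_retries; i != 0; i-- )); do
        kill -0 "$pid" 2> /dev/null || return 1   # target died while we waited
        # assumed probe: any rpc.py round-trip proves the socket is serving
        "$rootdir/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &> /dev/null && break
        sleep 0.5
    done
    (( i == 0 )) && return 1                  # @860: retries exhausted
    return 0                                  # @864: target is up
}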
00:31:25.757 [2024-07-25 11:56:24.595348] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86335 ]
00:31:25.757 [2024-07-25 11:56:24.765867] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:26.015 [2024-07-25 11:56:25.012775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:31:26.950 11:56:25 ftl -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:31:26.950 11:56:25 ftl -- common/autotest_common.sh@864 -- # return 0
00:31:26.950 11:56:25 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:31:27.209 nvme0n1
00:31:27.209 11:56:26 ftl -- ftl/ftl.sh@22 -- # clear_lvols
00:31:27.209 11:56:26 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:31:27.209 11:56:26 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:31:27.469 11:56:26 ftl -- ftl/common.sh@28 -- # stores=0f37c6bd-d428-415d-b655-cd5e7bcc461f
00:31:27.469 11:56:26 ftl -- ftl/common.sh@29 -- # for lvs in $stores
00:31:27.469 11:56:26 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0f37c6bd-d428-415d-b655-cd5e7bcc461f
00:31:27.727 11:56:26 ftl -- ftl/ftl.sh@23 -- # killprocess 86335
00:31:27.727 11:56:26 ftl -- common/autotest_common.sh@950 -- # '[' -z 86335 ']'
00:31:27.727 11:56:26 ftl -- common/autotest_common.sh@954 -- # kill -0 86335
00:31:27.727 11:56:26 ftl -- common/autotest_common.sh@955 -- # uname
00:31:27.727 11:56:26 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:31:27.727 11:56:26 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86335
00:31:27.727 killing process with pid 86335
00:31:27.727 11:56:26 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:31:27.727 11:56:26 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:31:27.727 11:56:26 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86335'
00:31:27.727 11:56:26 ftl -- common/autotest_common.sh@969 -- # kill 86335
00:31:27.727 11:56:26 ftl -- common/autotest_common.sh@974 -- # wait 86335
00:31:30.258 11:56:29 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:31:30.258 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:31:30.258 Waiting for block devices as requested
00:31:30.516 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:31:30.516 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:31:30.516 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:31:30.772 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:31:36.034 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:31:36.034 Remove shared memory files
00:31:36.034 11:56:34 ftl -- ftl/ftl.sh@28 -- # remove_shm
00:31:36.034 11:56:34 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files
00:31:36.034 11:56:34 ftl -- ftl/common.sh@205 -- # rm -f rm -f
00:31:36.034 11:56:34 ftl -- ftl/common.sh@206 -- # rm -f rm -f
00:31:36.034 11:56:34 ftl -- ftl/common.sh@207 -- # rm -f rm -f
00:31:36.034 11:56:34 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:31:36.034 11:56:34 ftl -- ftl/common.sh@209 -- # rm -f rm -f
00:31:36.034
00:31:36.034 real 12m15.569s
00:31:36.034 user 15m14.037s
00:31:36.034 sys 1m36.251s
00:31:36.034 11:56:34 ftl -- common/autotest_common.sh@1126 -- # xtrace_disable
00:31:36.034 11:56:34 ftl -- common/autotest_common.sh@10 -- # set +x
00:31:36.034 ************************************
00:31:36.034 END TEST ftl
00:31:36.034 ************************************
00:31:36.034 11:56:34 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']'
00:31:36.034 11:56:34 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']'
00:31:36.034 11:56:34 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']'
00:31:36.034 11:56:34 -- spdk/autotest.sh@360 -- # '[' 0 -eq 1 ']'
00:31:36.034 11:56:34 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]]
00:31:36.034 11:56:34 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]]
00:31:36.034 11:56:34 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]]
00:31:36.034 11:56:34 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]]
00:31:36.034 11:56:34 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT
00:31:36.034 11:56:34 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup
00:31:36.034 11:56:34 -- common/autotest_common.sh@724 -- # xtrace_disable
00:31:36.034 11:56:34 -- common/autotest_common.sh@10 -- # set +x
00:31:36.034 11:56:34 -- spdk/autotest.sh@387 -- # autotest_cleanup
00:31:36.034 11:56:34 -- common/autotest_common.sh@1392 -- # local autotest_es=0
00:31:36.034 11:56:34 -- common/autotest_common.sh@1393 -- # xtrace_disable
00:31:36.034 11:56:34 -- common/autotest_common.sh@10 -- # set +x
00:31:37.038 INFO: APP EXITING
00:31:37.038 INFO: killing all VMs
00:31:37.038 INFO: killing vhost app
00:31:37.038 INFO: EXIT DONE
00:31:37.322 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:31:37.581 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:31:37.840 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:31:37.840 0000:00:12.0 (1b36 0010): Already using the nvme driver
00:31:37.840 0000:00:13.0 (1b36 0010): Already using the nvme driver
00:31:38.099 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:31:38.666 Cleaning
00:31:38.666 Removing: /var/run/dpdk/spdk0/config
00:31:38.666 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:31:38.666 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:31:38.666 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:31:38.666 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:31:38.666 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:31:38.666 Removing: /var/run/dpdk/spdk0/hugepage_info
00:31:38.666 Removing: /var/run/dpdk/spdk0
00:31:38.666 Removing: /var/run/dpdk/spdk_pid62094
00:31:38.666 Removing: /var/run/dpdk/spdk_pid62321
00:31:38.666 Removing: /var/run/dpdk/spdk_pid62542
00:31:38.666 Removing: /var/run/dpdk/spdk_pid62646
00:31:38.666 Removing: /var/run/dpdk/spdk_pid62702
00:31:38.666 Removing: /var/run/dpdk/spdk_pid62830
00:31:38.666 Removing: /var/run/dpdk/spdk_pid62854
00:31:38.666 Removing: /var/run/dpdk/spdk_pid63040
00:31:38.666 Removing: /var/run/dpdk/spdk_pid63143
00:31:38.666 Removing: /var/run/dpdk/spdk_pid63248
00:31:38.666 Removing: /var/run/dpdk/spdk_pid63362
00:31:38.666 Removing: /var/run/dpdk/spdk_pid63464
00:31:38.666 Removing: /var/run/dpdk/spdk_pid63510
00:31:38.666 Removing: /var/run/dpdk/spdk_pid63552
00:31:38.666 Removing: /var/run/dpdk/spdk_pid63620
00:31:38.666 Removing: /var/run/dpdk/spdk_pid63729
00:31:38.666 Removing: /var/run/dpdk/spdk_pid64185
00:31:38.666 Removing: /var/run/dpdk/spdk_pid64260
00:31:38.666 Removing: /var/run/dpdk/spdk_pid64334
00:31:38.666 Removing: /var/run/dpdk/spdk_pid64356
00:31:38.666 Removing: /var/run/dpdk/spdk_pid64510
00:31:38.666 Removing: /var/run/dpdk/spdk_pid64526
00:31:38.666 Removing: /var/run/dpdk/spdk_pid64680
00:31:38.666 Removing: /var/run/dpdk/spdk_pid64696
00:31:38.666 Removing: /var/run/dpdk/spdk_pid64771
00:31:38.666 Removing: /var/run/dpdk/spdk_pid64790
00:31:38.666 Removing: /var/run/dpdk/spdk_pid64854
00:31:38.666 Removing: /var/run/dpdk/spdk_pid64878
00:31:38.666 Removing: /var/run/dpdk/spdk_pid65065
00:31:38.666 Removing: /var/run/dpdk/spdk_pid65107
00:31:38.666 Removing: /var/run/dpdk/spdk_pid65188
00:31:38.666 Removing: /var/run/dpdk/spdk_pid65366
00:31:38.666 Removing: /var/run/dpdk/spdk_pid65461
00:31:38.666 Removing: /var/run/dpdk/spdk_pid65503
00:31:38.666 Removing: /var/run/dpdk/spdk_pid65975
00:31:38.666 Removing: /var/run/dpdk/spdk_pid66083
00:31:38.666 Removing: /var/run/dpdk/spdk_pid66199
00:31:38.666 Removing: /var/run/dpdk/spdk_pid66252
00:31:38.666 Removing: /var/run/dpdk/spdk_pid66283
00:31:38.666 Removing: /var/run/dpdk/spdk_pid66364
00:31:38.666 Removing: /var/run/dpdk/spdk_pid67002
00:31:38.666 Removing: /var/run/dpdk/spdk_pid67050
00:31:38.666 Removing: /var/run/dpdk/spdk_pid67566
00:31:38.666 Removing: /var/run/dpdk/spdk_pid67676
00:31:38.666 Removing: /var/run/dpdk/spdk_pid67796
00:31:38.666 Removing: /var/run/dpdk/spdk_pid67856
00:31:38.666 Removing: /var/run/dpdk/spdk_pid67887
00:31:38.666 Removing: /var/run/dpdk/spdk_pid67918
00:31:38.667 Removing: /var/run/dpdk/spdk_pid69776
00:31:38.667 Removing: /var/run/dpdk/spdk_pid69924
00:31:38.667 Removing: /var/run/dpdk/spdk_pid69929
00:31:38.667 Removing: /var/run/dpdk/spdk_pid69941
00:31:38.667 Removing: /var/run/dpdk/spdk_pid69991
00:31:38.667 Removing: /var/run/dpdk/spdk_pid69995
00:31:38.667 Removing: /var/run/dpdk/spdk_pid70007
00:31:38.667 Removing: /var/run/dpdk/spdk_pid70052
00:31:38.667 Removing: /var/run/dpdk/spdk_pid70056
00:31:38.667 Removing: /var/run/dpdk/spdk_pid70073
00:31:38.667 Removing: /var/run/dpdk/spdk_pid70118
00:31:38.667 Removing: /var/run/dpdk/spdk_pid70122
00:31:38.667 Removing: /var/run/dpdk/spdk_pid70134
00:31:38.667 Removing: /var/run/dpdk/spdk_pid71497
00:31:38.667 Removing: /var/run/dpdk/spdk_pid71598
00:31:38.667 Removing: /var/run/dpdk/spdk_pid72998
00:31:38.667 Removing: /var/run/dpdk/spdk_pid74342
00:31:38.667 Removing: /var/run/dpdk/spdk_pid74479
00:31:38.667 Removing: /var/run/dpdk/spdk_pid74611
00:31:38.667 Removing: /var/run/dpdk/spdk_pid74738
00:31:38.667 Removing: /var/run/dpdk/spdk_pid74891
00:31:38.667 Removing: /var/run/dpdk/spdk_pid74971
00:31:38.667 Removing: /var/run/dpdk/spdk_pid75115
00:31:38.667 Removing: /var/run/dpdk/spdk_pid75487
00:31:38.667 Removing: /var/run/dpdk/spdk_pid75529
00:31:38.667 Removing: /var/run/dpdk/spdk_pid76012
00:31:38.667 Removing: /var/run/dpdk/spdk_pid76192
00:31:38.667 Removing: /var/run/dpdk/spdk_pid76305
00:31:38.667 Removing: /var/run/dpdk/spdk_pid76420
00:31:38.667 Removing: /var/run/dpdk/spdk_pid76487
00:31:38.667 Removing: /var/run/dpdk/spdk_pid76519
00:31:38.667 Removing: /var/run/dpdk/spdk_pid76826
00:31:38.667 Removing: /var/run/dpdk/spdk_pid76904
00:31:38.667 Removing: /var/run/dpdk/spdk_pid76981
00:31:38.667 Removing: /var/run/dpdk/spdk_pid77376
00:31:38.667 Removing: /var/run/dpdk/spdk_pid77522
00:31:38.667 Removing: /var/run/dpdk/spdk_pid78300
00:31:38.667 Removing: /var/run/dpdk/spdk_pid78442
00:31:38.667 Removing: /var/run/dpdk/spdk_pid78643
00:31:38.667 Removing: /var/run/dpdk/spdk_pid78751
00:31:38.667 Removing: /var/run/dpdk/spdk_pid79105
00:31:38.667 Removing: /var/run/dpdk/spdk_pid79374
00:31:38.667 Removing: /var/run/dpdk/spdk_pid79735
00:31:38.667 Removing: /var/run/dpdk/spdk_pid79940
00:31:38.667 Removing: /var/run/dpdk/spdk_pid80077
00:31:38.667 Removing: /var/run/dpdk/spdk_pid80147
00:31:38.667 Removing: /var/run/dpdk/spdk_pid80296
00:31:38.667 Removing: /var/run/dpdk/spdk_pid80327
00:31:38.667 Removing: /var/run/dpdk/spdk_pid80395
00:31:38.667 Removing: /var/run/dpdk/spdk_pid80598
00:31:38.667 Removing: /var/run/dpdk/spdk_pid80856
00:31:38.667 Removing: /var/run/dpdk/spdk_pid81269
00:31:38.667 Removing: /var/run/dpdk/spdk_pid81725
00:31:38.667 Removing: /var/run/dpdk/spdk_pid82151
00:31:38.667 Removing: /var/run/dpdk/spdk_pid82655
00:31:38.667 Removing: /var/run/dpdk/spdk_pid82794
00:31:38.667 Removing: /var/run/dpdk/spdk_pid82899
00:31:38.667 Removing: /var/run/dpdk/spdk_pid83613
00:31:38.667 Removing: /var/run/dpdk/spdk_pid83688
00:31:38.667 Removing: /var/run/dpdk/spdk_pid84204
00:31:38.667 Removing: /var/run/dpdk/spdk_pid84689
00:31:38.667 Removing: /var/run/dpdk/spdk_pid85215
00:31:38.667 Removing: /var/run/dpdk/spdk_pid85343
00:31:38.667 Removing: /var/run/dpdk/spdk_pid85396
00:31:38.667 Removing: /var/run/dpdk/spdk_pid85466
00:31:38.667 Removing: /var/run/dpdk/spdk_pid85534
00:31:38.667 Removing: /var/run/dpdk/spdk_pid85604
00:31:38.667 Removing: /var/run/dpdk/spdk_pid85843
00:31:38.667 Removing: /var/run/dpdk/spdk_pid85922
00:31:38.667 Removing: /var/run/dpdk/spdk_pid85996
00:31:38.667 Removing: /var/run/dpdk/spdk_pid86086
00:31:38.667 Removing: /var/run/dpdk/spdk_pid86125
00:31:38.667 Removing: /var/run/dpdk/spdk_pid86200
00:31:38.667 Removing: /var/run/dpdk/spdk_pid86335
00:31:38.667 Clean
00:31:38.926 11:56:37 -- common/autotest_common.sh@1451 -- # return 0
00:31:38.926 11:56:37 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup
00:31:38.926 11:56:37 -- common/autotest_common.sh@730 -- # xtrace_disable
00:31:38.926 11:56:37 -- common/autotest_common.sh@10 -- # set +x
00:31:38.926 11:56:37 -- spdk/autotest.sh@390 -- # timing_exit autotest
00:31:38.926 11:56:37 -- common/autotest_common.sh@730 -- # xtrace_disable
00:31:38.926 11:56:37 -- common/autotest_common.sh@10 -- # set +x
00:31:38.926 11:56:37 -- spdk/autotest.sh@391 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:31:38.926 11:56:37 -- spdk/autotest.sh@393 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:31:38.926 11:56:37 -- spdk/autotest.sh@393 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:31:38.926 11:56:37 -- spdk/autotest.sh@395 -- # hash lcov
00:31:38.926 11:56:37 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:31:38.926 11:56:37 -- spdk/autotest.sh@397 -- # hostname
00:31:38.926 11:56:37 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:31:39.185 geninfo: WARNING: invalid characters removed from testname!
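The lcov invocations that follow merge the pre-test and post-test captures, then strip coverage for paths that are not SPDK's own code. A minimal sketch of the same merge-and-filter flow, with hypothetical file names (lcov's -a adds a tracefile, -r removes records matching a pattern, -o names the output):

    # Sketch only; base.info/test.info/total.info are hypothetical names.
    lcov -q -a base.info -a test.info -o total.info   # combine both captures
    lcov -q -r total.info '*/dpdk/*' -o total.info    # drop bundled DPDK sources
    lcov -q -r total.info '/usr/*'   -o total.info    # drop system headers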
00:32:11.255 11:57:05 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:11.255 11:57:09 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:13.219 11:57:12 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:16.503 11:57:14 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:19.033 11:57:17 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:22.315 11:57:20 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:24.850 11:57:23 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:32:24.850 11:57:23 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:32:24.850 11:57:23 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:32:24.850 11:57:23 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:32:24.850 11:57:23 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:32:24.850 11:57:23 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:24.850 11:57:23 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:24.850 11:57:23 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:24.850 11:57:23 -- paths/export.sh@5 -- $ export PATH
00:32:24.850 11:57:23 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:24.850 11:57:23 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:32:24.850 11:57:23 -- common/autobuild_common.sh@447 -- $ date +%s
00:32:24.850 11:57:23 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721908643.XXXXXX
00:32:24.850 11:57:23 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721908643.GNyvRG
00:32:24.850 11:57:23 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
00:32:24.850 11:57:23 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']'
00:32:24.850 11:57:23 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:32:24.850 11:57:23 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:32:24.850 11:57:23 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:32:24.850 11:57:23 -- common/autobuild_common.sh@463 -- $ get_config_params
00:32:24.850 11:57:23 -- common/autotest_common.sh@398 -- $ xtrace_disable
00:32:24.850 11:57:23 -- common/autotest_common.sh@10 -- $ set +x
00:32:24.850 11:57:23 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:32:24.850 11:57:23 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
00:32:24.850 11:57:23 -- pm/common@17 -- $ local monitor
00:32:24.850 11:57:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:24.850 11:57:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:24.850 11:57:23 -- pm/common@25 -- $ sleep 1
00:32:24.850 11:57:23 -- pm/common@21 -- $ date +%s
00:32:24.850 11:57:23 -- pm/common@21 -- $ date +%s
00:32:24.850 11:57:23 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721908643
00:32:24.850 11:57:23 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721908643
00:32:25.107 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721908643_collect-cpu-load.pm.log
00:32:25.107 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721908643_collect-vmstat.pm.log
00:32:26.039 11:57:24 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:32:26.039 11:57:24 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10
00:32:26.039 11:57:24 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk
00:32:26.039 11:57:24 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:32:26.040 11:57:24 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:32:26.040 11:57:24 -- spdk/autopackage.sh@19 -- $ timing_finish
00:32:26.040 11:57:24 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:32:26.040 11:57:24 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:32:26.040 11:57:24 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:32:26.040 11:57:24 -- spdk/autopackage.sh@20 -- $ exit 0
00:32:26.040 11:57:24 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:32:26.040 11:57:24 -- pm/common@29 -- $ signal_monitor_resources TERM
00:32:26.040 11:57:24 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:32:26.040 11:57:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:26.040 11:57:24 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:32:26.040 11:57:24 -- pm/common@44 -- $ pid=87991
00:32:26.040 11:57:24 -- pm/common@50 -- $ kill -TERM 87991
00:32:26.040 11:57:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:26.040 11:57:24 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:32:26.040 11:57:24 -- pm/common@44 -- $ pid=87992
00:32:26.040 11:57:24 -- pm/common@50 -- $ kill -TERM 87992
+ [[ -n 5306 ]]
+ sudo kill 5306
00:32:26.050 [Pipeline] }
00:32:26.068 [Pipeline] // timeout
00:32:26.073 [Pipeline] }
00:32:26.092 [Pipeline] // stage
00:32:26.098 [Pipeline] }
00:32:26.115 [Pipeline] // catchError
00:32:26.125 [Pipeline] stage
00:32:26.127 [Pipeline] { (Stop VM)
00:32:26.141 [Pipeline] sh
00:32:26.473 + vagrant halt
00:32:29.751 ==> default: Halting domain...
00:32:36.362 [Pipeline] sh
00:32:36.641 + vagrant destroy -f
00:32:39.925 ==> default: Removing domain...
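The VM teardown above runs a graceful `vagrant halt` before `vagrant destroy -f`. A minimal sketch of the same two-step pattern, with a fallback in case the guest ignores the shutdown request (the timeout value is an assumption, not taken from this job):

    # Sketch only: graceful stop first, then force-destroy unconditionally.
    timeout 300 vagrant halt || true   # don't fail the teardown if halt hangs
    vagrant destroy -f                 # remove the domain regardless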
00:32:40.503 [Pipeline] sh
00:32:40.826 + mv output /var/jenkins/workspace/nvme-vg-autotest_3/output
00:32:40.835 [Pipeline] }
00:32:40.852 [Pipeline] // stage
00:32:40.858 [Pipeline] }
00:32:40.874 [Pipeline] // dir
00:32:40.879 [Pipeline] }
00:32:40.896 [Pipeline] // wrap
00:32:40.902 [Pipeline] }
00:32:40.917 [Pipeline] // catchError
00:32:40.926 [Pipeline] stage
00:32:40.928 [Pipeline] { (Epilogue)
00:32:40.941 [Pipeline] sh
00:32:41.220 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:32:47.792 [Pipeline] catchError
00:32:47.794 [Pipeline] {
00:32:47.808 [Pipeline] sh
00:32:48.090 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:32:48.090 Artifacts sizes are good
00:32:48.099 [Pipeline] }
00:32:48.117 [Pipeline] // catchError
00:32:48.129 [Pipeline] archiveArtifacts
00:32:48.137 Archiving artifacts
00:32:48.279 [Pipeline] cleanWs
00:32:48.290 [WS-CLEANUP] Deleting project workspace...
00:32:48.290 [WS-CLEANUP] Deferred wipeout is used...
00:32:48.297 [WS-CLEANUP] done
00:32:48.299 [Pipeline] }
00:32:48.318 [Pipeline] // stage
00:32:48.324 [Pipeline] }
00:32:48.341 [Pipeline] // node
00:32:48.346 [Pipeline] End of Pipeline
00:32:48.381 Finished: SUCCESS